| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/21790
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21790/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21790/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21790/events
|
https://github.com/huggingface/transformers/pull/21790
| 1,598,875,386
|
PR_kwDOCUB6oc5Ktrln
| 21,790
|
Fix resume_from_checkpoint for deepspeed [by mosheber]
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Identical to #21735, with no changes; opened only to trigger CI.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21790/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21790",
"html_url": "https://github.com/huggingface/transformers/pull/21790",
"diff_url": "https://github.com/huggingface/transformers/pull/21790.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21790.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21789
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21789/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21789/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21789/events
|
https://github.com/huggingface/transformers/pull/21789
| 1,598,787,421
|
PR_kwDOCUB6oc5KtYLN
| 21,789
|
Fix nn.init.trunc_normal_ call on torch.float16 data
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger @younesbelkada Feel free to merge, it appears I have no merge rights even after approval on transformers."
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
Follows https://github.com/huggingface/transformers/pull/20803, which introduced the idea but remained buggy in some cases, for example:
```python
from transformers import ViTForMaskedImageModeling
import torch
model = ViTForMaskedImageModeling.from_pretrained('hf-internal-testing/tiny-random-vit', torch_dtype=torch.float16).to("cuda")
```
still raises `RuntimeError: "erfinv_vml_cpu" not implemented for 'Half'`.
Let me know if you would like me to add tests.
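For reference, a common way to make truncated-normal init work on half-precision weights is to run the init in `float32` and cast back; a minimal sketch of that pattern (illustrative only, not necessarily the exact fix in this PR):
```python
import torch
import torch.nn as nn

def trunc_normal_half_safe(tensor: torch.Tensor, mean: float = 0.0, std: float = 1.0) -> torch.Tensor:
    # erfinv (used internally by trunc_normal_) is not implemented for
    # float16 on CPU, so initialize an fp32 copy and cast the result back.
    fp32 = tensor.detach().to(torch.float32)
    nn.init.trunc_normal_(fp32, mean=mean, std=std)
    with torch.no_grad():
        tensor.copy_(fp32.to(tensor.dtype))
    return tensor

weight = torch.empty(4, 4, dtype=torch.float16)
trunc_normal_half_safe(weight, std=0.02)  # no "erfinv_vml_cpu" error
```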
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21789/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21789",
"html_url": "https://github.com/huggingface/transformers/pull/21789",
"diff_url": "https://github.com/huggingface/transformers/pull/21789.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21789.patch",
"merged_at": 1677501089000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21788
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21788/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21788/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21788/events
|
https://github.com/huggingface/transformers/pull/21788
| 1,598,785,112
|
PR_kwDOCUB6oc5KtXsW
| 21,788
|
[SpeechT5] Fix HiFiGAN tests
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,687
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
The SpeechT5HiFiGAN tests added in PR #21702 failed in the daily CI run: https://github.com/huggingface/transformers/actions/runs/4248850253/jobs/7388500325
This PR fixes the torch device placement ✅
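For illustration, the device-placement pattern such fixes typically follow (a hypothetical sketch, not the exact diff): both the model and any tensors created inside the test must live on `torch_device`:
```python
import torch
from transformers import SpeechT5HifiGan, SpeechT5HifiGanConfig
from transformers.testing_utils import torch_device

# Hypothetical sketch: inputs default to CPU, so on a GPU runner both the
# model and the spectrogram must be moved to `torch_device` to avoid a
# CPU/CUDA tensor mismatch.
config = SpeechT5HifiGanConfig()
model = SpeechT5HifiGan(config).to(torch_device).eval()
spectrogram = torch.randn(10, config.model_in_dim).to(torch_device)
with torch.no_grad():
    waveform = model(spectrogram)
```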
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21788/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21788",
"html_url": "https://github.com/huggingface/transformers/pull/21788",
"diff_url": "https://github.com/huggingface/transformers/pull/21788.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21788.patch",
"merged_at": 1677254138000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21787
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21787/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21787/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21787/events
|
https://github.com/huggingface/transformers/pull/21787
| 1,598,721,281
|
PR_kwDOCUB6oc5KtJqV
| 21,787
|
Fix PyTorch Perceiver `PerceiverFourierPositionEncoding` with fp16
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger @amyeroberts Feel free to merge, it appears I have no merge rights even after approval on transformers."
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
Passing `torch_dtype=torch.float16` with Perceiver is currently broken on main. The error comes from a parameter that is always generated on the fly in `torch.float32`, which causes dtype issues later on. Reproduction:
```python
from transformers import AutoImageProcessor, PerceiverForImageClassificationConvProcessing
from PIL import Image
import requests
import torch
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("deepmind/vision-perceiver-conv")
model = PerceiverForImageClassificationConvProcessing.from_pretrained("deepmind/vision-perceiver-conv", torch_dtype=torch.float16).to("cuda")
inputs = image_processor(images=image, return_tensors="pt").pixel_values.to("cuda")
inputs = inputs.to(torch.float16)
outputs = model(inputs=inputs)
logits = outputs.logits
list(logits.shape)
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
This raises `RuntimeError: expected scalar type Float but found Half`.
This PR fixes the issue. Let me know if you would like me to add tests for this; I'm actually surprised this was not caught by an existing test.
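A generic sketch of this class of bug and fix (hypothetical helper, not the actual patch): features built on the fly should adopt the dtype of the tensors they are later combined with, rather than defaulting to `torch.float32`:
```python
import math
import torch

def fourier_features(pos: torch.Tensor, num_bands: int, max_res: int, dtype: torch.dtype) -> torch.Tensor:
    # Hypothetical helper: building the frequencies in the target dtype
    # (e.g. float16) avoids the Float/Half mismatch downstream.
    freqs = torch.linspace(1.0, max_res / 2, num_bands, dtype=dtype, device=pos.device)
    angles = pos[..., None].to(dtype) * freqs * math.pi
    return torch.cat([angles.sin(), angles.cos()], dim=-1)

pos = torch.linspace(-1.0, 1.0, 8)
feats = fourier_features(pos, num_bands=4, max_res=8, dtype=torch.float16)
print(feats.dtype)  # torch.float16
```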
## Who can review?
@amyeroberts @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21787/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21787",
"html_url": "https://github.com/huggingface/transformers/pull/21787",
"diff_url": "https://github.com/huggingface/transformers/pull/21787.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21787.patch",
"merged_at": 1677498237000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21786
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21786/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21786/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21786/events
|
https://github.com/huggingface/transformers/issues/21786
| 1,598,648,284
|
I_kwDOCUB6oc5fSW_c
| 21,786
|
BioGPT Token Classification
|
{
"login": "upjabir",
"id": 40956091,
"node_id": "MDQ6VXNlcjQwOTU2MDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/40956091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/upjabir",
"html_url": "https://github.com/upjabir",
"followers_url": "https://api.github.com/users/upjabir/followers",
"following_url": "https://api.github.com/users/upjabir/following{/other_user}",
"gists_url": "https://api.github.com/users/upjabir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/upjabir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/upjabir/subscriptions",
"organizations_url": "https://api.github.com/users/upjabir/orgs",
"repos_url": "https://api.github.com/users/upjabir/repos",
"events_url": "https://api.github.com/users/upjabir/events{/privacy}",
"received_events_url": "https://api.github.com/users/upjabir/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Yes, feel free to contribute :)",
"sure, Let me work on it",
"> sure, Let me work on it\r\n\r\nAre you working on it? Else I will take it up.",
"@kurchi1205 Am working on it . The problem is that biogpt doesnt have fast tokenizer . Currenlty am in a testing phase"
] | 1,677
| 1,680
| 1,680
|
CONTRIBUTOR
| null |
### Feature request
It would be nice to have token classification support available for BioGPT.
### Motivation
I am working on biomedical token classification datasets and would like to try BioGPT with them.
### Your contribution
I could send a PR if you want. I guess it shouldn't be too hard.
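For context, the standard token-classification head in `transformers` is just a dropout plus a linear layer over per-token hidden states; a rough sketch of what this could look like for BioGPT (hypothetical class, not the merged implementation):
```python
import torch.nn as nn
from transformers import BioGptConfig, BioGptModel

class BioGptForTokenClassificationSketch(nn.Module):
    # Hypothetical sketch following the usual pattern:
    # per-token hidden states -> dropout -> linear projection to num_labels.
    def __init__(self, config: BioGptConfig, num_labels: int = 2):
        super().__init__()
        self.biogpt = BioGptModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        hidden = self.biogpt(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.classifier(self.dropout(hidden))  # (batch, seq_len, num_labels)
```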
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21786/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21785
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21785/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21785/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21785/events
|
https://github.com/huggingface/transformers/pull/21785
| 1,598,610,618
|
PR_kwDOCUB6oc5KsxQL
| 21,785
|
Add Pop2Piano
|
{
"login": "susnato",
"id": 56069179,
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susnato",
"html_url": "https://github.com/susnato",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"repos_url": "https://api.github.com/users/susnato/repos",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @ArthurZucker the implementation is almost ready(also tests) but I feel that the way I implemented this model is not a descent level, thats why I want you to take a look at the `Pop2PianoModel` structure. \r\nJust to be clear with the feature extractor, the `Pop2PianoFeatureExtractor` takes raw_audio as input and generates variable length output(`10, 50000`, `15, 62200`), even if I pad the raw_audio at start, it will still produce different results for different audio files, so I used lists to stack them and then wrapped them through `BatchFeature`. \r\n\r\nPlease don't mind about docs I will change them afterwards \r\n\r\n**EDIT : Please ignore this** ",
"*(Here is the author of pop2piano)* \r\nThank you for doing this PR. It seems that this was implemented by understanding the original code better than me! Please feel free to ask me if there is anything I can check or do.",
"@sweetcocoa Thanks for you comments, HF team has helped me a lot in this integration.",
"For solving the import issues, you have to create a `require_xxx` with the name of the package. Look for example at the [`require_accelerate`](https://github.com/ArthurZucker/transformers/blob/c3a10a5dace55657a639789ad41fb4ded80e96fe/src/transformers/testing_utils.py#L259) in the `testing_utils.py`! 😉 \r\n",
"Hi @ArthurZucker thanks for you comment!\r\nBut I have already created `require_xxx` in `testing_utils.py` regarding `essentia` and `pretty_midi` and also I have used them in `transformers/src/transformers/models/pop2piano/__init__.py`. ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21785). All of your documentation changes will be reflected on that endpoint.",
"Hi @ArthurZucker sorry for the delay but the tests are green now! please review it.",
"Hi @ArthurZucker , sorry for the huge delay, I have made most of the changes that you asked. Also there are some changes that I didn't do these are below - \r\n1. I managed to remove dependency on soundfile and torchaudio but not librosa, since raw_audio is used 2 times first time in `extract_rhythm` which takes audio with original sampling_rate and the second time in `single_preprocess` which first upscales/downscales raw_audio to sampling_rate of 22050 and then uses it. And preloading raw_audio with sampling_rate of 22050(not with native sampling_rate) was giving very bad results! I tried to use scipy.resample but since it uses fft it is relatively slow and less accurate.\r\n2. As you suggested to pad the feature_extractor outputs with silence, I tried to do that but I found that different audio files with same length have input_features of different shapes! For example one was having shape of [7, 38, 512] and another one of [6, 42, 512], both were 10s audios. I could pad them and use them in a batch but then I need to keep track of their shapes which would introduce another variable, what do you suggest? Should I leave them or try to pad and keep track of shapes? \r\n\r\nThe [licensing page](https://essentia.upf.edu/licensing_information.html) of `essentia` says - \"Essentia is available under an open license, [Affero GPLv3](http://www.gnu.org/licenses/agpl.html), for non-commercial applications, thus it is possible to test the library before deciding to licence it under a comercial licence.\" I don't know much about licensing so I will leave it upto you to decide what to change in the headings. \r\n\r\n\r\nAlso please forgive me if I missed something, I will change them in the future commit.",
"Hey! Will try to have a look asap",
"Pinging @sanchit-gandhi for a review too! ",
"Hi @ArthurZucker In have made the changes you requested except the resample one (I have mentioned the reason [here](https://github.com/huggingface/transformers/pull/21785#discussion_r1180702087)), let me know if more changes are needed or not.",
"Hi @sanchit-gandhi thanks for your comments! and sorry for the late reply, the batching is not working for these reasons - \r\n\r\n1. **feature_extractor** - The output of the feature extractor varies from music to music! For example - music1.mp3(10 seconds long) can have a feature extractor output of shape - [7, 38, 512] and music2.mp3(also 10 seconds long) can have a feature extractor output of shape - [6, 42, 512] . So if a user tries to process multiple music files in batch it will be very hard to batch them! \r\n- `truncation` - Truncating both of them to say [5, 25, 512] gives pretty bad results! \r\n - `padding` - One way we can overcome this is, we can take the maximum dimensions on each axis and pad them, but then we must unpad them or get their original shapes back, otherwise the the tokenizer won't work! So we can make a variable just to record the shapes of the tensors before padding. We can do this approach if you want.\r\n \r\nI tried other approaches such as `torch.nested.nested_tensor` but there the user wont be able to use `.to(\"cuda\") or .to(\"cpu\")` on feature_extractor outputs because they are not supported!\r\n\r\n2. **model** - The model.generate can take inputs of (dim1, dim2, dim3) but as we have (dim1, dim2, dim3) shapes for each input we may need to use a for loop if we are to support batching! \r\n\r\nAlso should I make a new PR regarding the assert for T5?",
"Hi @sanchit-gandhi, I pushed the modifications, tests which are failing are mostly due to internal errors(Connection to HF hub, TF installation etc).\r\nPlease ignore the all checkpoints as those are temporary. I will change all of them just before the merge.\r\nAlso please tell me if any more modifications are needed or not and also if I have missed any or not.\r\n\r\nAnd in meantime I will make a PR regarding the change of T5 assert to except, I think maybe we should wait until that gets merged and then we will change the blocks to except here too? \r\n\r\n**EDIT** : pushed the change with the T5 modification.",
"pushed the change with the `T5` modification.",
"Also requesting review from @hollance!",
"Hi @hollance, I have made those changes you requested. And Hi @sanchit-gandhi, please review the batching part(except the checkpoint part as we discussed in slack if want to move them to a separate org or not), let me know if more code changes are required or not.\r\nbtw I was automatically removed from slack channel as it says `susnatodhar10@gmail.com’s Workspace’s free trial of Slack Pro has ended. Any channels with people outside your company — such as #pop2piano-integration — have been disconnected.` \r\nIf anymore work is needed such as transferring the files to organization checkpoint, updating the HF Space for Pop2Piano ... please let me know I would be happy to do that! ",
"@susnato \r\n\r\n> btw I was automatically removed from slack channel as it says `susnatodhar10@gmail.com’s Workspace’s free trial of Slack Pro has ended. Any channels with people outside your company — such as #pop2piano-integration — have been disconnected.`\r\n\r\nI added you back as a guest to the pop2piano-integration channel. You should be able to use this using regular Slack (not Pro). Let me know if that doesn't work.\r\n",
"Hi @sanchit-gandhi, I have pushed the new changes.",
"Alright nice! And you've verified that the outputs are the same with/without padding? Requesting review from @ArthurZucker and @hollance to kick-off the last round of reviews :)",
"yes I did check for 3 types - 1. single audio + no_attention_mask, 2.single audio + attention_mask, 3. 2 audios + attention_mask and the outputs were same. Since you just said, I will still add a test for that in the next commit(after last round of reviews). ",
"reviewing right now",
"Hi @ArthurZucker I have pushed the comments. Let me know if any more changes are needed or not. ",
"I'll @sanchit-gandhi review now, before pinging a core maintainer ",
"I have transferred the necessary files to `sweetcocoa/pop2piano` and updated the checkpoints. @sanchit-gandhi ",
"Thanks for updating again, @sanchit-gandhi has his hands full, I'll review this weekend! Sorry for the delay and great work! 🔥 ",
"@ArthurZucker thanks for the quick reply, please don't worry about the delay, and thanks to the HF team for launching very intuitive audio course! :hugs: ",
"Sorry, I missed the review request the first time! Rest assured it's on my list - I haven't forgotten @susnato! Aiming to have you a review either on Sunday or Monday afternoon latest 🙌",
"Hi @sanchit-gandhi, I have pushed the changes that you requested. Also answered the questions in the threads.",
"Cool thanks for the explanations @susnato, all good with me. Could you click \"resolve\" on all the threads that have been completed?",
"Hi @sanchit-gandhi, just resolved all other threads."
] | 1,677
| 1,692
| 1,692
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds the Pop2Piano model to Hugging Face Transformers.
Fixes #20126
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a [link](https://github.com/huggingface/transformers/issues/20126)
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21785/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21785/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21785",
"html_url": "https://github.com/huggingface/transformers/pull/21785",
"diff_url": "https://github.com/huggingface/transformers/pull/21785.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21785.patch",
"merged_at": 1692632100000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21784
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21784/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21784/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21784/events
|
https://github.com/huggingface/transformers/pull/21784
| 1,598,521,539
|
PR_kwDOCUB6oc5Ksdrg
| 21,784
|
Inheritance-based framework detection
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Rocketknight1 (FYI :P)",
"_The documentation is not available anymore as the PR was closed or merged._",
"Now with a test 👍 ",
"@Rocketknight1 good point! I have no clue if it holds for TF2.5 -- going to double check",
"@Rocketknight1 hah, the class is different but it still works! In TF2.4, we have `<class 'tensorflow.python.keras.engine.training.Model'>` in the inheritance tree, so `\"keras.engine.training.Mode\"` is still a match :D\r\n\r\nNo change is needed, but thanks for the shoutout 👌 "
] | 1,677
| 1,677
| 1,677
|
MEMBER
| null |
# What does this PR do?
Related to #21761
Problem: In some functions, we detect the framework of the model class through its name (e.g. if it starts with `TF`). This is a quirk of our library, and users might run into issues due to this hidden behavior. For instance, in the issue linked above, a user created a child class of a TensorFlow model whose name did not start with `TF` and ran into exceptions.
Solution: Inheritance-based framework detection :)
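A minimal sketch of the idea (illustrative only; names are hypothetical): walk the class's MRO and inspect the defining modules instead of the class name, so user subclasses with arbitrary names are detected correctly:
```python
def infer_framework_sketch(model_class: type) -> str:
    # Hypothetical sketch: inspect the inheritance tree rather than relying
    # on a "TF"/"Flax" name prefix.
    for base in model_class.__mro__:
        module = base.__module__
        if module.startswith("tensorflow") or "keras" in module:
            return "tf"
        if module.startswith("torch"):
            return "pt"
        if module.startswith(("flax", "jax")):
            return "flax"
    raise TypeError(f"Could not infer framework from class {model_class}.")

import torch.nn as nn

class MyOddlyNamedModel(nn.Module):  # no framework prefix, still detected
    pass

print(infer_framework_sketch(MyOddlyNamedModel))  # "pt"
```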
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21784/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21784",
"html_url": "https://github.com/huggingface/transformers/pull/21784",
"diff_url": "https://github.com/huggingface/transformers/pull/21784.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21784.patch",
"merged_at": 1677511916000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21783
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21783/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21783/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21783/events
|
https://github.com/huggingface/transformers/issues/21783
| 1,598,473,440
|
I_kwDOCUB6oc5fRsTg
| 21,783
|
When will transformers consider supporting LoRA?
|
{
"login": "ZihaoW123",
"id": 47708655,
"node_id": "MDQ6VXNlcjQ3NzA4NjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/47708655?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZihaoW123",
"html_url": "https://github.com/ZihaoW123",
"followers_url": "https://api.github.com/users/ZihaoW123/followers",
"following_url": "https://api.github.com/users/ZihaoW123/following{/other_user}",
"gists_url": "https://api.github.com/users/ZihaoW123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZihaoW123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZihaoW123/subscriptions",
"organizations_url": "https://api.github.com/users/ZihaoW123/orgs",
"repos_url": "https://api.github.com/users/ZihaoW123/repos",
"events_url": "https://api.github.com/users/ZihaoW123/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZihaoW123/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey, Huggingface released https://github.com/huggingface/peft about two weeks ago, which enables you to use LoRA with transformers :)\r\n",
"Thanks for jumping on this @AhmedIdr . Closing this!"
] | 1,677
| 1,677
| 1,677
|
NONE
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21783/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21782
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21782/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21782/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21782/events
|
https://github.com/huggingface/transformers/pull/21782
| 1,598,440,335
|
PR_kwDOCUB6oc5KsLbQ
| 21,782
|
Fix type in gpt2 config docstring
|
{
"login": "WeberJulian",
"id": 17219561,
"node_id": "MDQ6VXNlcjE3MjE5NTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/17219561?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WeberJulian",
"html_url": "https://github.com/WeberJulian",
"followers_url": "https://api.github.com/users/WeberJulian/followers",
"following_url": "https://api.github.com/users/WeberJulian/following{/other_user}",
"gists_url": "https://api.github.com/users/WeberJulian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WeberJulian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WeberJulian/subscriptions",
"organizations_url": "https://api.github.com/users/WeberJulian/orgs",
"repos_url": "https://api.github.com/users/WeberJulian/repos",
"events_url": "https://api.github.com/users/WeberJulian/events{/privacy}",
"received_events_url": "https://api.github.com/users/WeberJulian/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR corrects the type of the field `embd_pdrop` in the docstring of `configuration_gpt2.py`; the field should be a float, not an int, as the default value `0.1` suggests.
## Who can review?
Documentation: @sgugger, @stevhliu and @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21782/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21782",
"html_url": "https://github.com/huggingface/transformers/pull/21782",
"diff_url": "https://github.com/huggingface/transformers/pull/21782.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21782.patch",
"merged_at": 1677482360000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21781
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21781/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21781/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21781/events
|
https://github.com/huggingface/transformers/issues/21781
| 1,598,411,055
|
I_kwDOCUB6oc5fRdEv
| 21,781
|
pipeline not loading the model
|
{
"login": "deepanshudashora",
"id": 41534031,
"node_id": "MDQ6VXNlcjQxNTM0MDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/41534031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deepanshudashora",
"html_url": "https://github.com/deepanshudashora",
"followers_url": "https://api.github.com/users/deepanshudashora/followers",
"following_url": "https://api.github.com/users/deepanshudashora/following{/other_user}",
"gists_url": "https://api.github.com/users/deepanshudashora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deepanshudashora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deepanshudashora/subscriptions",
"organizations_url": "https://api.github.com/users/deepanshudashora/orgs",
"repos_url": "https://api.github.com/users/deepanshudashora/repos",
"events_url": "https://api.github.com/users/deepanshudashora/events{/privacy}",
"received_events_url": "https://api.github.com/users/deepanshudashora/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"How? This pipeline requires a feature extractor to see the document doesn't it? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,680
| 1,680
|
NONE
| null |
### System Info
ubuntu 22.04
### Who can help?
@Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
pipe = pipeline("document-question-answering", model=model,tokenizer=tokenizer)
File "/home/ubuntu/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 811, in pipeline
raise Exception(
Exception: Impossible to guess which feature extractor to use. Please provide a PreTrainedFeatureExtractor class or a path/identifier to a pretrained feature extractor.
```
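As the exception message suggests, a likely workaround (sketched below; the checkpoint path is a placeholder) is to pass the feature extractor explicitly when building the pipeline:
```python
from transformers import AutoFeatureExtractor, pipeline

# Hypothetical sketch: `model` and `tokenizer` are the objects from the
# snippet above; "path/to/checkpoint" is a placeholder for the actual one.
feature_extractor = AutoFeatureExtractor.from_pretrained("path/to/checkpoint")
pipe = pipeline(
    "document-question-answering",
    model=model,
    tokenizer=tokenizer,
    feature_extractor=feature_extractor,
)
```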
### Expected behavior
Should be able to load the model.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21781/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21780
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21780/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21780/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21780/events
|
https://github.com/huggingface/transformers/pull/21780
| 1,598,301,486
|
PR_kwDOCUB6oc5KrsqY
| 21,780
|
Replace `-m torch.distributed.run` by `torchrun`
|
{
"login": "regisss",
"id": 15324346,
"node_id": "MDQ6VXNlcjE1MzI0MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regisss",
"html_url": "https://github.com/regisss",
"followers_url": "https://api.github.com/users/regisss/followers",
"following_url": "https://api.github.com/users/regisss/following{/other_user}",
"gists_url": "https://api.github.com/users/regisss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regisss/subscriptions",
"organizations_url": "https://api.github.com/users/regisss/orgs",
"repos_url": "https://api.github.com/users/regisss/repos",
"events_url": "https://api.github.com/users/regisss/events{/privacy}",
"received_events_url": "https://api.github.com/users/regisss/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21780). All of your documentation changes will be reflected on that endpoint.",
"Just for reference, is there a reason why the previous occurence is deprecated? (not familiar with it!)",
"`torchrun` is equivalent to `python -m torch.distributed.run` while `python -m torch.distributed.launch` is deprecated. I think the reason why it is deprecated is just that `torchrun` does the same but also provides more functionalities.\r\n\r\nI improved the description of this PR accordingly.",
"However `torchrun` has only been available since the release of `torch` 1.10. I guess we want to keep compatibility with some previous versions of `torch` right? @sgugger @ArthurZucker ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,687
| 1,680
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR replaces occurrences of `-m torch.distributed.launch` (deprecated) and `-m torch.distributed.run` (its equivalent) with `torchrun`. More information [here](https://pytorch.org/docs/stable/elastic/run.html).
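For illustration, the three invocations below launch an equivalent two-process job (`train.py` is a placeholder for any training script); the first is deprecated, and `torchrun` is the recommended form as of torch 1.10:
```bash
# Deprecated launcher:
python -m torch.distributed.launch --nproc_per_node=2 train.py
# Equivalent module form:
python -m torch.distributed.run --nproc_per_node=2 train.py
# Recommended console script (torch >= 1.10):
torchrun --nproc_per_node=2 train.py
```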
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21780/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21780",
"html_url": "https://github.com/huggingface/transformers/pull/21780",
"diff_url": "https://github.com/huggingface/transformers/pull/21780.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21780.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21779
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21779/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21779/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21779/events
|
https://github.com/huggingface/transformers/issues/21779
| 1,598,208,714
|
I_kwDOCUB6oc5fQrrK
| 21,779
|
Add Flax Whisper for audio classification
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
},
{
"id": 2934977194,
"node_id": "MDU6TGFiZWwyOTM0OTc3MTk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flax",
"name": "Flax",
"color": "4862AD",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Hi. It's my first time contributing to open source. I want to tackle this issue. How can I get started?",
"I have contributed to a few good first issues on HF, would like to take this to learn JAX if available!",
"@Potato-Cracker , @yhl48 Are you guys currently working on it? I have a working branch locally with passing tests, but if you guys would like to make PR, that's totally cool too. ",
"@Shubhamai please go ahead with the PR!",
"Uh, looks like a PR is already submitted :smile: , I will see if I can assist the linked PR. ",
"Very cool that there's so much interest in adding Flax models! Great to see that the JAX/Flax community is so active 🙌 Would you guys be interested in finding other PyTorch models to port to JAX/Flax in `transformers`?",
"@sanchit-gandhi I will be happy to contribute to some of them, would be great if you have any suggestions on any particular models!",
"Very cool! You can take a look at the model integration table here: https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx#supported-frameworks\r\n\r\nThere are a bunch of popular models that are supported in PyTorch but not Flax, LLaMa being one of them! This could be a cool model addition if you're interested?",
"I would love to take up LLaMa if it's available.",
"Very cool! What I would suggest doing is starting from the Flax GPT-Neo model (since this is the Flax model most similar to LLaMa) and then adding the new bits in",
"@sanchit-gandhi Would love to take on https://huggingface.co/openai-gpt. I just hope inferencing on my mac works out",
"@sanchit-gandhi .hello, I would like to work on TAPAS."
] | 1,677
| 1,683
| 1,683
|
CONTRIBUTOR
| null |
### Feature request
The PR https://github.com/huggingface/transformers/pull/21754 adds the PyTorch version of WhisperForAudioClassification. It would be great to add the Flax equivalent for cross-library equivalence ♻️
### Motivation
Whisper is an encoder-decoder model for speech recognition. However, we can repurpose the model for other speech tasks, such as audio classification.
Audio classification is the task of mapping from an input speech sequence to a single class prediction. For more details, refer to the task page on the Hub: https://huggingface.co/tasks/audio-classification
For audio classification, we only require a single model output. Thus, we do not need the auto-regressive generation capacities of the Whisper decoder (which is used to generate a sequence of text tokens during speech recognition). Instead, we can just use the Whisper encoder to get hidden states, and add a classification head on top to make class label predictions.
This is analogous to using a Wav2Vec2 model for audio classification: the Wav2Vec2 encoder is used to get hidden states, and a classification head is added on top to make class label predictions.
The PR https://github.com/huggingface/transformers/pull/21754 adds the PyTorch version of WhisperForAudioClassification. It required adding a projection layer and classification layer on top of the WhisperEncoder. For more details, refer directly to the pull request.
It would be great to add the Flax equivalent of this model for cross-framework support.
The most difficult part of this PR will be getting the model tester to work. You can see from the PyTorch PR that we require a standalone tester for the audio classification model. This is because the original Whisper model is an encoder-decoder model, but the audio classification model is an encoder-only model. Thus, we require different testing logic.
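To make the head architecture concrete, here is a rough Flax sketch of the projection + mean-pooling + classification layers described above (hypothetical module, not the merged implementation; `classifier_proj_size` mirrors the Whisper config field):
```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class AudioClassificationHeadSketch(nn.Module):
    # Hypothetical sketch: project encoder hidden states, mean-pool over the
    # time axis, then map the pooled vector to class logits.
    classifier_proj_size: int
    num_labels: int

    @nn.compact
    def __call__(self, encoder_hidden_states):  # (batch, seq_len, hidden)
        x = nn.Dense(self.classifier_proj_size)(encoder_hidden_states)
        pooled = jnp.mean(x, axis=1)               # (batch, proj_size)
        return nn.Dense(self.num_labels)(pooled)   # (batch, num_labels)

head = AudioClassificationHeadSketch(classifier_proj_size=256, num_labels=10)
dummy = jnp.ones((2, 1500, 384))  # dummy encoder-shaped outputs
params = head.init(jax.random.PRNGKey(0), dummy)
logits = head.apply(params, dummy)  # shape (2, 10)
```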
### Your contribution
Opening this one up to the community! This will be quite a fun JAX/Flax PR! 🚀
If you're interested in tackling this, feel free to drop a comment in this thread and open a PR when you're ready. More than happy to answer any questions / queries about this integration!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21779/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21778
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21778/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21778/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21778/events
|
https://github.com/huggingface/transformers/issues/21778
| 1,598,198,203
|
I_kwDOCUB6oc5fQpG7
| 21,778
|
Add TensorFlow Wav2Vec2 for sequence classification
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false
| null |
[] |
[
"This feature request is closely related to #21777! Once we have the TF Wav2Vec2 model for sequence classification added, we can copy across the projection layers and classification layers to Whisper in order to add `TFWhisperForAudioClassifcation`. Two birds with one stone ⚡️",
"Hi @sanchit-gandhi I would love to take this up.",
"Very cool @nandwalritik! The first thing to do would be to add the equivalent TensorFlow code for the projection layer and classification layer on top of the base `TFWav2Vec2Model`. Do you want to have a go at adding this in a new PR? Happy to help with any questions / guidance! There's a bit of info as to where the PyTorch code lives in the original post ^",
"Hi @sanchit-gandhi I have added some initial changes in #22073 PR, but while initializing it with pytorch weights \r\n```model_tf = TFWav2Vec2ForSequenceClassification.from_pretrained(\"superb/wav2vec2-base-superb-ks\",from_pt=True)``` like this it gives `Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFWav2Vec2ForSequenceClassification:` can you guide me with this? \r\n* I checked the shapes for `hidden_states` and `pooled_output` in pytorch and tf implementation they both are matching.",
"hi @sanchit-gandhi can you guide me for above error, so that I can make all the required changes and close the PR.",
"Hey, \r\nCan you share the complete stack trace? \r\n>Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFWav2Vec2ForSequenceClassification:\r\n\r\nThe important part of the error is _Some_. Most likely the classification head is not being loaded correctly. \r\n\r\nQuestions: \r\n1. Is it a warning? or is it an error? \r\n2. Did you try running the model after this? \r\n3. Tried using the same model for PyTorch and see if you get the same error.\r\n \r\ncc: @nandwalritik ",
"> Hey, Can you share the complete stack trace?\r\n> \r\n> > Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFWav2Vec2ForSequenceClassification:\r\n> \r\n> The important part of the error is _Some_. Most likely the classification head is not being loaded correctly.\r\n> \r\n> Questions:\r\n> \r\n> 1. Is it a warning? or is it an error?\r\n> 2. Did you try running the model after this?\r\n> 3. Tried using the same model for PyTorch and see if you get the same error.\r\n> \r\n> cc: @nandwalritik\r\n<details>\r\n<summary>Stacktrace</summary>\r\n\r\n```\r\n>>> tf_model = TFWav2Vec2ForSequenceClassification.from_pretrained(\"superb/wav2vec2-base-superb-ks\",from_pt=True)\r\n/home/nandwalritik/nandwalritik/transformers/src/transformers/configuration_utils.py:379: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`.\r\n warnings.warn(\r\n\r\nTFWav2Vec2ForSequenceClassification has backpropagation operations that are NOT supported on CPU. If you wish to train/fine-tine this model, you need a GPU or a TPU\r\n\r\nTFWav2Vec2Model has backpropagation operations that are NOT supported on CPU. If you wish to train/fine-tine this model, you need a GPU or a TPU\r\nSome weights of the PyTorch model were not used when initializing the TF 2.0 model TFWav2Vec2ForSequenceClassification: ['wav2vec2.encoder.layers.10.attention.q_proj.weight', 'wav2vec2.encoder.layers.1.attention.k_proj.bias', 'wav2vec2.encoder.layers.1.attention.q_proj.bias', 'wav2vec2.encoder.layers.0.attention.v_proj.bias', 'wav2vec2.encoder.layers.6.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.10.attention.v_proj.weight', 'wav2vec2.encoder.layers.1.attention.out_proj.bias', 'wav2vec2.encoder.layers.0.layer_norm.weight', 'wav2vec2.encoder.layers.3.layer_norm.weight', 'wav2vec2.encoder.layers.10.attention.out_proj.weight', 'wav2vec2.encoder.layers.6.feed_forward.intermediate_dense.bias', 'wav2vec2.feature_extractor.conv_layers.4.conv.weight', 'wav2vec2.encoder.pos_conv_embed.conv.weight_v', 'wav2vec2.encoder.layers.8.attention.out_proj.bias', 'wav2vec2.encoder.layers.9.layer_norm.weight', 'wav2vec2.encoder.layers.0.attention.k_proj.bias', 'wav2vec2.encoder.layers.0.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.11.attention.v_proj.weight', 'wav2vec2.encoder.layers.5.attention.k_proj.weight', 'wav2vec2.encoder.layers.6.final_layer_norm.weight', 'wav2vec2.encoder.layers.9.feed_forward.output_dense.weight', 'wav2vec2.masked_spec_embed', 'wav2vec2.encoder.layers.6.attention.q_proj.weight', 'wav2vec2.encoder.layers.4.attention.v_proj.bias', 'wav2vec2.encoder.layers.11.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.6.attention.q_proj.bias', 'wav2vec2.encoder.layers.0.attention.q_proj.bias', 'wav2vec2.encoder.layers.4.final_layer_norm.weight', 'wav2vec2.encoder.layers.5.attention.k_proj.bias', 'wav2vec2.encoder.layers.7.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.3.attention.k_proj.bias', 'wav2vec2.encoder.layers.8.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.6.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.8.attention.out_proj.weight', 'wav2vec2.encoder.layers.7.attention.out_proj.bias', 'wav2vec2.encoder.layers.8.attention.q_proj.bias', 'wav2vec2.feature_extractor.conv_layers.2.conv.weight', 
'wav2vec2.encoder.layers.11.feed_forward.output_dense.weight', 'wav2vec2.encoder.pos_conv_embed.conv.bias', 'wav2vec2.encoder.layers.4.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.11.final_layer_norm.weight', 'wav2vec2.encoder.layers.5.feed_forward.output_dense.bias', 'wav2vec2.feature_projection.projection.weight', 'wav2vec2.encoder.layers.5.attention.v_proj.weight', 'wav2vec2.encoder.layers.10.attention.out_proj.bias', 'wav2vec2.encoder.layers.4.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.9.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.0.attention.k_proj.weight', 'wav2vec2.encoder.layers.7.layer_norm.bias', 'wav2vec2.encoder.layers.1.attention.q_proj.weight', 'wav2vec2.encoder.layers.7.layer_norm.weight', 'wav2vec2.feature_extractor.conv_layers.1.conv.weight', 'wav2vec2.encoder.layers.8.attention.v_proj.bias', 'projector.bias', 'wav2vec2.encoder.layers.2.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.8.attention.q_proj.weight', 'wav2vec2.encoder.layers.8.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.10.attention.k_proj.bias', 'wav2vec2.encoder.layers.4.attention.out_proj.bias', 'wav2vec2.encoder.layers.6.final_layer_norm.bias', 'layer_weights', 'wav2vec2.encoder.layers.1.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.11.attention.k_proj.bias', 'wav2vec2.encoder.layers.7.attention.v_proj.weight', 'wav2vec2.encoder.layers.2.attention.out_proj.bias', 'wav2vec2.encoder.layers.4.attention.out_proj.weight', 'wav2vec2.encoder.layers.0.final_layer_norm.bias', 'wav2vec2.encoder.layers.7.attention.q_proj.weight', 'wav2vec2.encoder.layers.3.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.10.feed_forward.output_dense.weight', 'wav2vec2.feature_projection.layer_norm.bias', 'wav2vec2.encoder.layers.6.attention.k_proj.weight', 'wav2vec2.encoder.layers.7.attention.v_proj.bias', 'wav2vec2.encoder.layers.4.attention.k_proj.bias', 'wav2vec2.encoder.layers.4.layer_norm.weight', 'wav2vec2.encoder.layers.9.attention.q_proj.bias', 'wav2vec2.encoder.layers.4.attention.q_proj.bias', 'wav2vec2.encoder.layers.8.layer_norm.weight', 'wav2vec2.encoder.layers.2.final_layer_norm.weight', 'wav2vec2.feature_projection.projection.bias', 'wav2vec2.encoder.layers.3.final_layer_norm.bias', 'wav2vec2.encoder.layers.8.layer_norm.bias', 'wav2vec2.encoder.layers.7.attention.k_proj.bias', 'wav2vec2.encoder.layers.5.layer_norm.weight', 'wav2vec2.encoder.layers.10.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.6.attention.v_proj.bias', 'wav2vec2.encoder.layers.8.attention.v_proj.weight', 'wav2vec2.encoder.layers.8.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.5.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.1.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.5.attention.out_proj.bias', 'wav2vec2.encoder.layers.10.layer_norm.weight', 'wav2vec2.encoder.layers.8.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.9.attention.q_proj.weight', 'wav2vec2.encoder.layers.5.attention.v_proj.bias', 'wav2vec2.encoder.layers.6.attention.out_proj.weight', 'wav2vec2.encoder.layers.3.attention.k_proj.weight', 'wav2vec2.encoder.layers.11.attention.q_proj.bias', 'wav2vec2.feature_projection.layer_norm.weight', 'wav2vec2.encoder.layers.1.layer_norm.bias', 'wav2vec2.feature_extractor.conv_layers.6.conv.weight', 'wav2vec2.encoder.layers.7.attention.q_proj.bias', 'wav2vec2.encoder.layers.9.attention.k_proj.bias', 'wav2vec2.encoder.layers.3.attention.q_proj.weight', 
'wav2vec2.encoder.layers.10.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.3.final_layer_norm.weight', 'wav2vec2.encoder.layers.2.attention.v_proj.weight', 'wav2vec2.encoder.layers.0.attention.out_proj.bias', 'wav2vec2.encoder.layers.3.layer_norm.bias', 'wav2vec2.encoder.layers.6.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.0.attention.out_proj.weight', 'wav2vec2.encoder.layers.4.layer_norm.bias', 'wav2vec2.encoder.layers.5.attention.q_proj.bias', 'wav2vec2.encoder.layers.5.attention.q_proj.weight', 'wav2vec2.encoder.layers.9.final_layer_norm.bias', 'wav2vec2.encoder.layers.5.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.11.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.4.attention.q_proj.weight', 'wav2vec2.encoder.layers.2.attention.out_proj.weight', 'wav2vec2.feature_extractor.conv_layers.3.conv.weight', 'wav2vec2.encoder.layers.5.final_layer_norm.weight', 'wav2vec2.encoder.layers.2.attention.q_proj.bias', 'wav2vec2.encoder.layer_norm.weight', 'wav2vec2.encoder.layers.3.attention.v_proj.bias', 'wav2vec2.encoder.layers.7.final_layer_norm.weight', 'wav2vec2.encoder.layers.6.attention.out_proj.bias', 'wav2vec2.encoder.layers.9.attention.k_proj.weight', 'wav2vec2.encoder.layer_norm.bias', 'wav2vec2.encoder.layers.7.attention.out_proj.weight', 'wav2vec2.encoder.layers.7.feed_forward.intermediate_dense.weight', 'classifier.weight', 'wav2vec2.encoder.layers.1.attention.v_proj.bias', 'wav2vec2.encoder.layers.1.attention.out_proj.weight', 'wav2vec2.encoder.layers.2.attention.q_proj.weight', 'wav2vec2.encoder.layers.11.attention.k_proj.weight', 'wav2vec2.encoder.layers.4.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.7.attention.k_proj.weight', 'wav2vec2.encoder.layers.11.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.8.final_layer_norm.weight', 'wav2vec2.encoder.layers.11.attention.out_proj.weight', 'wav2vec2.encoder.pos_conv_embed.conv.weight_g', 'wav2vec2.encoder.layers.10.final_layer_norm.bias', 'projector.weight', 'wav2vec2.encoder.layers.0.attention.q_proj.weight', 'wav2vec2.encoder.layers.6.attention.v_proj.weight', 'wav2vec2.encoder.layers.11.attention.v_proj.bias', 'wav2vec2.feature_extractor.conv_layers.0.conv.weight', 'wav2vec2.encoder.layers.10.attention.k_proj.weight', 'wav2vec2.encoder.layers.10.feed_forward.output_dense.bias', 'wav2vec2.feature_extractor.conv_layers.0.layer_norm.bias', 'wav2vec2.encoder.layers.2.attention.v_proj.bias', 'wav2vec2.encoder.layers.1.layer_norm.weight', 'wav2vec2.encoder.layers.7.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.1.final_layer_norm.weight', 'wav2vec2.encoder.layers.3.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.4.attention.k_proj.weight', 'wav2vec2.encoder.layers.0.layer_norm.bias', 'wav2vec2.encoder.layers.11.final_layer_norm.bias', 'wav2vec2.encoder.layers.9.attention.out_proj.bias', 'wav2vec2.encoder.layers.8.final_layer_norm.bias', 'wav2vec2.encoder.layers.10.final_layer_norm.weight', 'wav2vec2.encoder.layers.1.final_layer_norm.bias', 'wav2vec2.encoder.layers.1.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.10.attention.v_proj.bias', 'wav2vec2.encoder.layers.3.attention.out_proj.weight', 'wav2vec2.encoder.layers.3.attention.out_proj.bias', 'wav2vec2.encoder.layers.9.attention.v_proj.bias', 'wav2vec2.encoder.layers.4.attention.v_proj.weight', 'wav2vec2.encoder.layers.1.attention.v_proj.weight', 'wav2vec2.encoder.layers.9.feed_forward.intermediate_dense.weight', 
'wav2vec2.encoder.layers.11.attention.out_proj.bias', 'wav2vec2.encoder.layers.5.final_layer_norm.bias', 'wav2vec2.encoder.layers.5.attention.out_proj.weight', 'wav2vec2.encoder.layers.10.attention.q_proj.bias', 'wav2vec2.encoder.layers.6.layer_norm.bias', 'wav2vec2.encoder.layers.7.final_layer_norm.bias', 'classifier.bias', 'wav2vec2.encoder.layers.0.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.6.attention.k_proj.bias', 'wav2vec2.encoder.layers.5.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.0.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.2.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.1.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.2.attention.k_proj.weight', 'wav2vec2.encoder.layers.2.layer_norm.weight', 'wav2vec2.encoder.layers.3.attention.v_proj.weight', 'wav2vec2.encoder.layers.4.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.0.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.10.layer_norm.bias', 'wav2vec2.encoder.layers.7.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.9.attention.v_proj.weight', 'wav2vec2.encoder.layers.9.final_layer_norm.weight', 'wav2vec2.encoder.layers.11.layer_norm.weight', 'wav2vec2.encoder.layers.2.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.1.attention.k_proj.weight', 'wav2vec2.feature_extractor.conv_layers.5.conv.weight', 'wav2vec2.encoder.layers.2.layer_norm.bias', 'wav2vec2.encoder.layers.2.final_layer_norm.bias', 'wav2vec2.encoder.layers.2.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.3.attention.q_proj.bias', 'wav2vec2.encoder.layers.3.feed_forward.intermediate_dense.bias', 'wav2vec2.feature_extractor.conv_layers.0.layer_norm.weight', 'wav2vec2.encoder.layers.0.attention.v_proj.weight', 'wav2vec2.encoder.layers.2.attention.k_proj.bias', 'wav2vec2.encoder.layers.9.layer_norm.bias', 'wav2vec2.encoder.layers.8.attention.k_proj.bias', 'wav2vec2.encoder.layers.11.attention.q_proj.weight', 'wav2vec2.encoder.layers.4.final_layer_norm.bias', 'wav2vec2.encoder.layers.6.layer_norm.weight', 'wav2vec2.encoder.layers.8.attention.k_proj.weight', 'wav2vec2.encoder.layers.11.layer_norm.bias', 'wav2vec2.encoder.layers.9.attention.out_proj.weight', 'wav2vec2.encoder.layers.0.final_layer_norm.weight', 'wav2vec2.encoder.layers.5.layer_norm.bias', 'wav2vec2.encoder.layers.3.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.9.feed_forward.output_dense.bias']\r\n- This IS expected if you are initializing TFWav2Vec2ForSequenceClassification from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing TFWav2Vec2ForSequenceClassification from a PyTorch model that you expect to be exactly identical (e.g. 
initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights or buffers of the TF 2.0 model TFWav2Vec2ForSequenceClassification were not initialized from the PyTorch model and are newly initialized: ['tf_wav2_vec2_model_1.wav2vec2.masked_spec_embed', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.0.conv.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.0.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.0.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.1.conv.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.2.conv.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.3.conv.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.4.conv.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.5.conv.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.6.conv.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_projection.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_projection.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.feature_projection.projection.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_projection.projection.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.pos_conv_embed.conv.weight_v', 'tf_wav2_vec2_model_1.wav2vec2.encoder.pos_conv_embed.conv.weight_g', 'tf_wav2_vec2_model_1.wav2vec2.encoder.pos_conv_embed.conv.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.layer_norm.bias', 
'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.attention.out_proj.bias', 
'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.attention.out_proj.weight', 
'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.attention.v_proj.bias', 
'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.final_layer_norm.bias', 'dense_2.weight', 'dense_2.bias', 'dense_3.weight', 'dense_3.bias', 'Variable']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n</details>\r\n\r\n1. It's a warning.\r\n2. I tried running on sample_inputs same as [here](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2ForSequenceClassification.forward.example)\r\n```\r\n>>> inputs_tf = feature_extractor(dataset[0][\"audio\"][\"array\"],sampling_rate=sampling_rate,return_tensors=\"tf\")\r\n>>> inputs = feature_extractor(dataset[0][\"audio\"][\"array\"], sampling_rate=sampling_rate, return_tensors=\"pt\")\r\n>>> with torch.no_grad():\r\n... logits = model(**inputs).logits\r\n... \r\n>>> logits = tf_model(**inputs_tf).logits\r\n>>> inputs_tf = feature_extractor(dataset[0][\"audio\"][\"array\"],sampling_rate=sampling_rate,return_tensors=\"tf\")\r\n>>> inputs = feature_extractor(dataset[0][\"audio\"][\"array\"], sampling_rate=sampling_rate, return_tensors=\"pt\")\r\n>>> with torch.no_grad():\r\n... logits = model(**inputs).logits\r\n... 
\r\n>>> logits_tf = tf_model(**inputs_tf).logits\r\n>>> logits\r\ntensor([[-0.0732, -0.5845, -3.5185, -1.4014, -0.1823, -2.9616, -3.1919, -1.3804,\r\n -1.1895, 0.4006, 6.4601, -6.2880]])\r\n>>> logits_tf\r\n<tf.Tensor: shape=(1, 12), dtype=float32, numpy=\r\narray([[-1.310684 , 0.13441604, 0.6363504 , -0.5188892 , 0.46565807,\r\n -0.25152174, -0.45716044, -0.14784068, 0.176272 , 1.4507922 ,\r\n -1.9966551 , -0.5963241 ]], dtype=float32)>\r\n>>> equal = torch.allclose(logits,torch.tensor(logits_tf.numpy()), rtol=1e-5)\r\n>>> equal\r\nFalse \r\n```\r\n3. The PyTorch model doesn't give any error/warning like that.\r\n\r\n",
"Ok. You have enough to go on here. \r\nThe output is not equal because you're not using all the weights in the pretrained model. \r\n1. The warning states that for some reason some layers were initialized with the pretrained weights and some weren't. \r\n2. This usually happens if the model doesn't match perfectly. \r\n3. If the model has N layers and only the first M match exactly then only the first M will be loaded from the pretrained model. \r\n\r\nSo, print the dimensions of all the layers of both models and verify layer by layer if everything matches perfectly.\r\ncc: @nandwalritik ",
"Thanks for helping out here @vimarshc! Your tips were spot on ✅ @nandwalritik has the PR nearly finished and equality with the PyTorch model"
] | 1,677
| 1,682
| null |
CONTRIBUTOR
| null |
### Feature request
Wav2Vec2 is one of the most popular speech recognition models, used over 2 million times monthly. In the PyTorch modelling code, we have Wav2Vec2 for speech recognition _and_ Wav2Vec2 for audio classification. However, in TensorFlow, we only have Wav2Vec2 for speech recognition. It would be great to add Wav2Vec2 for audio classification to the TensorFlow modelling code for cross-framework equivalence!
### Motivation
The audio classification class for PyTorch Wav2Vec2 lives under `Wav2Vec2ForSequenceClassification`:
https://github.com/huggingface/transformers/blob/13489248fa8f2cda7503628204f8f43b108797a2/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1745
For this feature request, we'll need to port this PyTorch code into TensorFlow to create an equivalent TensorFlow class, `TFWav2Vec2ForSequenceClassification`.
This means adding a projection layer and classification layer on top of the base `TFWav2Vec2Model`. See the PyTorch code for reference:
https://github.com/huggingface/transformers/blob/13489248fa8f2cda7503628204f8f43b108797a2/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1753-L1758
To check that our implementation is correct, we can do one forward pass of the PyTorch model and one forward pass of the TensorFlow model with the same inputs. If the output logits agree to within 1e-5, we know that our TensorFlow model is correct ✅. We can then enable PT-TF cross tests in the modelling file such that these checks are performed by the CI.
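As a rough starting point, a minimal sketch of this check might look as follows. Note that `TFWav2Vec2ForSequenceClassification` is the class this issue asks for, so it is hypothetical here, and the checkpoint name is only an example:

```python
# Sketch of the PT/TF equivalence check described above. The TF class is the
# one requested by this issue, so it is hypothetical here; the checkpoint is
# just an example with an audio classification head.
import numpy as np
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification

checkpoint = "superb/wav2vec2-base-superb-ks"  # example checkpoint
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
pt_model = Wav2Vec2ForSequenceClassification.from_pretrained(checkpoint)

audio = np.random.randn(16000).astype(np.float32)  # 1 s of dummy 16 kHz audio
pt_inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="pt")
tf_inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="tf")

with torch.no_grad():
    pt_logits = pt_model(**pt_inputs).logits.numpy()

# Hypothetical class to be added by this issue:
# tf_model = TFWav2Vec2ForSequenceClassification.from_pretrained(checkpoint, from_pt=True)
# tf_logits = tf_model(**tf_inputs).logits.numpy()
# assert np.allclose(pt_logits, tf_logits, atol=1e-5)
```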
### Your contribution
Opening this one up to the community! If you're interested in tackling this, feel free to drop a comment in this thread and open a PR when you're ready. More than happy to answer any questions / queries about this integration!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21778/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/21777
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21777/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21777/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21777/events
|
https://github.com/huggingface/transformers/issues/21777
| 1,598,186,717
|
I_kwDOCUB6oc5fQmTd
| 21,777
|
Add TensorFlow Whisper model for audio classification
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false
| null |
[] |
[
"Hey @sanchit-gandhi, if we're just using the encoder do you think a CTC head could also work, i.e. `WhisperForCTC`?",
"Hey @OllieBroadhurst! I don't think a an encoder-only Whisper model for speech recognition would be super practical since we'd then need an _external_ language model to correct the phonetic errors made by the CTC model. IMO we're better off using the _internal_ language model provided by the decoder in the original encoder-decoder architecture. The encoder-decoder model is trained end-to-end and on all of the Whisper pre-training data, so likely going to be better than any combination of CTC + LM we train ourselves",
"Hello @OllieBroadhurst are you currently working on this? I would love to help out if I can/you need it. Otherwise, I would like to take a look at this issue.",
"Hi @adit299 ! I'm not so you can take it away!",
"Great, will do!"
] | 1,677
| 1,678
| null |
CONTRIBUTOR
| null |
### Feature request
The PR https://github.com/huggingface/transformers/pull/21754 adds the PyTorch version of `WhisperForAudioClassification`. It would be great to add the TensorFlow equivalent.
### Motivation
Whisper is an encoder-decoder model for speech recognition. However, we can repurpose the model for other speech tasks, such as _audio classification_.
Audio classification is the task of mapping from an input speech sequence to a single class prediction. For more details, refer to the task page on the Hub: https://huggingface.co/tasks/audio-classification
For audio classification, we only require a _single_ model output. Thus, we do not need the auto-regressive generation capabilities of the Whisper decoder (which is used to generate a _sequence_ of text tokens during speech recognition). Instead, we can just use the Whisper encoder to get hidden states and add a classification head on top to make class label predictions.
This is analogous to using a Wav2Vec2 model for audio classification: the Wav2Vec2 encoder is used to get hidden states, and a classification head added on top to make class label predictions.
The PR https://github.com/huggingface/transformers/pull/21754 adds the PyTorch version of `WhisperForAudioClassification`. It required adding a projection layer and classification layer on top of the `WhisperEncoder`. For more details, refer directly to the pull request.
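To give a flavour of what is involved, here is an illustrative Keras sketch (not the final implementation); attribute names such as `classifier_proj_size` are assumed to mirror the PyTorch config:

```python
# Illustrative sketch of an encoder-only classification head for TF Whisper,
# analogous to the PyTorch model added in #21754. Attribute names such as
# `classifier_proj_size` are assumed to mirror the PyTorch config.
import tensorflow as tf

class TFWhisperClassificationHeadSketch(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        # Project encoder hidden states to the classifier dimension.
        self.projector = tf.keras.layers.Dense(config.classifier_proj_size, name="projector")
        self.classifier = tf.keras.layers.Dense(config.num_labels, name="classifier")

    def call(self, encoder_hidden_states):
        # Project, mean-pool over the time axis, then classify.
        hidden_states = self.projector(encoder_hidden_states)
        pooled_output = tf.reduce_mean(hidden_states, axis=1)
        return self.classifier(pooled_output)  # shape: (batch, num_labels)
```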
It would be great to add the TensorFlow equivalent of this model for cross-framework support.
The most difficult part of this PR will be getting the model tester to work. You can see from the PyTorch PR that we require a standalone tester for the audio classification model. This is because the original Whisper model is an encoder-decoder model, but the audio classification model is an encoder-only model. Thus, we require different testing logic.
### Your contribution
Opening this one up to the community! If you're interested in tackling this, feel free to drop a comment in this thread and open a PR when you're ready. More than happy to answer any questions / queries about this integration!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21777/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/21776
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21776/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21776/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21776/events
|
https://github.com/huggingface/transformers/pull/21776
| 1,598,167,639
|
PR_kwDOCUB6oc5KrPfN
| 21,776
|
Fix flaky test for log level
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you for the fix. There is a remaining one test to fix however, which appears after this PR.\r\n\r\n\r\nI am able to reproduce the issue with (deterministically)\r\n```bash\r\npython -m pytest tests/trainer tests/utils\r\n```\r\nbut not with\r\n```bash\r\npython -m pytest tests/trainer\r\n```\r\nor with\r\n```bash\r\npython -m pytest tests/trainer/test_trainer.py -k test_log_level\r\n```\r\nor with the reversed order\r\n```\r\npython -m pytest tests/utils tests/trainer\r\n```\r\n\r\n**With this PR, the `test_log_level` pass with the 1st command mentioned above, but we get** \r\n\r\n```bash\r\nFAILED tests/utils/test_logging.py::HfArgumentParserTest::test_advisory_warnings - AssertionError: '' != 'Testing 1, 2, 3\\n'\r\n+ Testing 1, 2, 3\r\n```\r\nHowever, on this PR again, and with\r\n```bash\r\npython -m pytest tests/utils\r\n```\r\nor even with the reversed order\r\n```bash\r\npython -m pytest tests/utils tests/trainer \r\n```\r\nit pass.",
"The test in question didn't set any log level, so the log level was still at ERROR when running the tests in the sequence you give. I fixed it by resetting the logger at the beginning of the test."
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
This should fix the flakiness of the log level test. If I'm not wrong, the flakiness came from the fact that the log level of Transformers can be changed by other tests (for instance, lots of Trainer tests change it), so assuming it would be WARNING at the beginning of the test was wrong. Instead, we test against the actual log level observed, which should fix the issue.
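In pseudo-code, the idea is roughly the following (a simplified sketch, not the actual diff; it assumes the helpers exposed by `transformers.utils.logging`):

```python
# Simplified sketch of the idea behind the fix (not the actual diff): read the
# verbosity that is actually set instead of assuming the process still starts
# at WARNING, and derive the expected behavior from it.
from transformers.utils import logging as hf_logging

def test_log_level_sketch():
    observed_level = hf_logging.get_verbosity()  # whatever earlier tests left behind
    logger = hf_logging.get_logger("transformers.trainer")
    # An INFO message is only expected in the captured output if the
    # observed level is INFO or lower.
    expect_info = observed_level <= hf_logging.INFO
    if expect_info:
        logger.info("this message should be captured")
```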
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21776/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21776",
"html_url": "https://github.com/huggingface/transformers/pull/21776",
"diff_url": "https://github.com/huggingface/transformers/pull/21776.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21776.patch",
"merged_at": 1677619455000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21775
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21775/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21775/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21775/events
|
https://github.com/huggingface/transformers/pull/21775
| 1,597,970,845
|
PR_kwDOCUB6oc5KqjJ7
| 21,775
|
[FX tracer] Make `concrete_args` from outside available
|
{
"login": "lygztq",
"id": 23189027,
"node_id": "MDQ6VXNlcjIzMTg5MDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/23189027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lygztq",
"html_url": "https://github.com/lygztq",
"followers_url": "https://api.github.com/users/lygztq/followers",
"following_url": "https://api.github.com/users/lygztq/following{/other_user}",
"gists_url": "https://api.github.com/users/lygztq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lygztq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lygztq/subscriptions",
"organizations_url": "https://api.github.com/users/lygztq/orgs",
"repos_url": "https://api.github.com/users/lygztq/repos",
"events_url": "https://api.github.com/users/lygztq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lygztq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@michaelbenayoun Could you merge this PR? I have no write access. Thanks"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
The current `HFTracer` implementation replaces `concrete_args` with their default values from the function signature; this behavior differs from the one in the description of the flag `complete_concrete_args_with_inputs_not_in_dummy_inputs`:
> If `True`, and `dummy_inputs` is specified, every argument that `root` can take that is not in `dummy_inputs` **AND NOT IN `concrete_args`** will be added to `concrete_args`, otherwise does nothing.
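For context, a usage sketch of tracing with caller-supplied `concrete_args` (the `trace()` signature here is assumed from the flag description above):

```python
# Usage sketch: with this fix, concrete_args supplied by the caller should be
# kept as-is rather than overwritten by signature defaults. The trace()
# signature is assumed from the flag described above.
import torch
import torch.fx
from transformers import BertConfig, BertModel
from transformers.utils.fx import HFTracer

model = BertModel(BertConfig())
dummy_inputs = {"input_ids": torch.zeros(1, 8, dtype=torch.long)}

tracer = HFTracer()
graph = tracer.trace(
    model,
    concrete_args={"output_attentions": False},  # caller-supplied, must survive
    dummy_inputs=dummy_inputs,
    complete_concrete_args_with_inputs_not_in_dummy_inputs=True,
)
traced_model = torch.fx.GraphModule(model, graph)
```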
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21775/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21775",
"html_url": "https://github.com/huggingface/transformers/pull/21775",
"diff_url": "https://github.com/huggingface/transformers/pull/21775.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21775.patch",
"merged_at": 1677484677000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21774
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21774/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21774/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21774/events
|
https://github.com/huggingface/transformers/pull/21774
| 1,597,934,179
|
PR_kwDOCUB6oc5KqbIl
| 21,774
|
[WIP] Add Seaformer
|
{
"login": "inderpreetsingh01",
"id": 54892545,
"node_id": "MDQ6VXNlcjU0ODkyNTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/54892545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/inderpreetsingh01",
"html_url": "https://github.com/inderpreetsingh01",
"followers_url": "https://api.github.com/users/inderpreetsingh01/followers",
"following_url": "https://api.github.com/users/inderpreetsingh01/following{/other_user}",
"gists_url": "https://api.github.com/users/inderpreetsingh01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/inderpreetsingh01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/inderpreetsingh01/subscriptions",
"organizations_url": "https://api.github.com/users/inderpreetsingh01/orgs",
"repos_url": "https://api.github.com/users/inderpreetsingh01/repos",
"events_url": "https://api.github.com/users/inderpreetsingh01/events{/privacy}",
"received_events_url": "https://api.github.com/users/inderpreetsingh01/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @inderpreetsingh01, SeaFormer is a mobile-friendly semantic segmentation model but I see that the PR is for a language model?\r\n\r\nFor reference, the best way to add a new model is to identify a similar model in the library (I'd say SegFormer in this case), create a new branch and initialize the files with `transformers-cli add-new-model-like`. You can refer to [this page](https://github.com/huggingface/transformers/blob/main/templates/adding_a_new_model/README.md) for more information.",
"Thanks @alaradirik i have done the changes you mentioned (initialized model as SegFormer) in new branch and raised the new PR #21819."
] | 1,677
| 1,677
| 1,677
|
NONE
| null |
<!-- Remove if not applicable -->
# What does this PR do?
Fixes #21668
SeaFormer is a two-branch architecture built around a squeeze-enhanced axial Transformer.
<br> Initialized as
**tokenizer_type** = Standalone <br> **is_encoder_decoder_model** = False, since it is encoder-only.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? #21668
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@alaradirik thanks for offering help with this PR, please let me know about any changes required.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21774/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21774",
"html_url": "https://github.com/huggingface/transformers/pull/21774",
"diff_url": "https://github.com/huggingface/transformers/pull/21774.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21774.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21773
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21773/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21773/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21773/events
|
https://github.com/huggingface/transformers/pull/21773
| 1,597,764,358
|
PR_kwDOCUB6oc5Kp2nr
| 21,773
|
[ProphetNet] Fix gradient checkpointing bug
|
{
"login": "yhl48",
"id": 25232361,
"node_id": "MDQ6VXNlcjI1MjMyMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/25232361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yhl48",
"html_url": "https://github.com/yhl48",
"followers_url": "https://api.github.com/users/yhl48/followers",
"following_url": "https://api.github.com/users/yhl48/following{/other_user}",
"gists_url": "https://api.github.com/users/yhl48/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yhl48/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yhl48/subscriptions",
"organizations_url": "https://api.github.com/users/yhl48/orgs",
"repos_url": "https://api.github.com/users/yhl48/repos",
"events_url": "https://api.github.com/users/yhl48/events{/privacy}",
"received_events_url": "https://api.github.com/users/yhl48/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"the code quality seems to have failed due to some other files that were not changed, but could anyone please confirm if that is the case?",
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @gante ",
"Hey @yhl48 -- same comment as in [here](https://github.com/huggingface/transformers/pull/21772#issuecomment-1443568457) (and the other PR has to be merged first) :)",
"(#21772 contains the changes here, closing this PR)"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #21737
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. -- #21737
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21773/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21773",
"html_url": "https://github.com/huggingface/transformers/pull/21773",
"diff_url": "https://github.com/huggingface/transformers/pull/21773.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21773.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21772
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21772/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21772/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21772/events
|
https://github.com/huggingface/transformers/pull/21772
| 1,597,727,468
|
PR_kwDOCUB6oc5KpvEu
| 21,772
|
[GPT2] Fix gradient checkpointing bug
|
{
"login": "yhl48",
"id": 25232361,
"node_id": "MDQ6VXNlcjI1MjMyMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/25232361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yhl48",
"html_url": "https://github.com/yhl48",
"followers_url": "https://api.github.com/users/yhl48/followers",
"following_url": "https://api.github.com/users/yhl48/following{/other_user}",
"gists_url": "https://api.github.com/users/yhl48/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yhl48/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yhl48/subscriptions",
"organizations_url": "https://api.github.com/users/yhl48/orgs",
"repos_url": "https://api.github.com/users/yhl48/repos",
"events_url": "https://api.github.com/users/yhl48/events{/privacy}",
"received_events_url": "https://api.github.com/users/yhl48/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @gante ",
"Hey @yhl48 👋 to make our CI pass, you'll have to run `make fix-copies` on your `transformers` folder, and then push the code again.\r\n\r\nIn a nutshell, we have a system in place that ensures that the code in a few models stays the synchronized. `make fix-copies` pushes the changes of the canonical models (such as GPT2) into the others :)",
"@yhl48 uhmmm something went wrong. What does your terminal print after running `make fix-copies`?",
"```\r\npython utils/check_copies.py --fix_and_overwrite\r\npython utils/check_table.py --fix_and_overwrite\r\npython utils/check_dummies.py --fix_and_overwrite\r\npython utils/check_task_guides.py --fix_and_overwrite\r\n```",
"I think the file `transformers/src/transformers/models/decision_transformer/modeling_decision_transformer.py` has to be edited as well.\r\n\r\nAlso because I was working on the main branch locally and pushed to a different branch, so that has caused some confusion with #21773. I should probably keep just one of #21772 and #21773, fixing the two models in one PR.",
"@yhl48 since this contains the changes of #21773, I'm going to merge this one and close the other PR :)"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #21737
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. -- #21737
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21772/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21772",
"html_url": "https://github.com/huggingface/transformers/pull/21772",
"diff_url": "https://github.com/huggingface/transformers/pull/21772.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21772.patch",
"merged_at": 1677253042000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21771
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21771/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21771/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21771/events
|
https://github.com/huggingface/transformers/pull/21771
| 1,597,654,652
|
PR_kwDOCUB6oc5KpfwR
| 21,771
|
Chunkable token classification pipeline
|
{
"login": "luccailliau",
"id": 74506016,
"node_id": "MDQ6VXNlcjc0NTA2MDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/74506016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luccailliau",
"html_url": "https://github.com/luccailliau",
"followers_url": "https://api.github.com/users/luccailliau/followers",
"following_url": "https://api.github.com/users/luccailliau/following{/other_user}",
"gists_url": "https://api.github.com/users/luccailliau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luccailliau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luccailliau/subscriptions",
"organizations_url": "https://api.github.com/users/luccailliau/orgs",
"repos_url": "https://api.github.com/users/luccailliau/repos",
"events_url": "https://api.github.com/users/luccailliau/events{/privacy}",
"received_events_url": "https://api.github.com/users/luccailliau/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"For whoever is reading this, I will quickly correct the alignment issues which concerns tokens that are removed due to special tokens mask :)",
"cc @Narsil\r\nI'll let you review and decide if it maybe would be more suitable as code on the hub or needs to be in Transformers.",
"Thank you for this PR. It looks promising.\r\n\r\n> is now able to process sequences longer than 512.\r\n\r\nDo you have a specific model in mind, `512` seems oddly specific.\r\n\r\n> if it maybe would be more suitable as code on the hub or needs to be in Transformers.\r\n\r\nI think it really depends on the complexity of the resulting code, and the cross over with other parameters.\r\n\r\n@LucCailliau the tests aren't passing yet. I don't want to do a full review before the tests are passing.\r\n\r\n\r\nSome notes:\r\n- Existing tests cannot be modified and must be passing\r\n- Overall code should be relatively simple (the current state looks ok on the surface).\r\n- The most complex part (I think it should be the conflict resolution in overlapping parts) must be as clear as possible. It's ok to put it into another function for special handling.\r\n- Unless it causes an explosion in complexity, it should work with all `aggregation_strategy`. The minimum are `NONE` and `SIMPLE`. If it causes and explosion in complexity, we need to forbid the use of the unsupported combinations\r\n- We need good tests for this feature:\r\n - Make sure it actually solves what this PR is set to do (handler longer than `model_max_length` inputs`). (Both a slow and fast test)\r\n - Check with `aggregation_strategy` parameters.\r\n - Check errors if preventing some parameter combo.\r\n \r\nDoes this make sense ?\r\n\r\nFor the sake of getting this moving forward faster I actually suggest splitting this over several PRs.\r\n\r\nThe first PR should be the move to `ChunkPipeline` and nothing else should be modified. Then adding the new parameter. We would need the second one, to be close enough to good state to merge the first (there's no point in changing the pipeline type if we're not able to support this `process_all` parameter correctly.\r\n\r\nDon't hesitate to ask more question should you need it.",
"@Narsil, all tests passed except the code quality. I used black but it doesn't pass. I also update the schema above to explain the algorithm for update/aggregate scores",
"> @Narsil, all tests passed except the code quality. I used black but it doesn't pass. I also update the schema above to explain the algorithm for update/aggregate scores\r\n\r\ntry `pip install -e .[quality] && make fixup` to use the correct black version.",
"@Narsil, wait, just one more thing to add and we can go",
"@Narsil, I checked everything on my side, we can go for it :)",
"@Narsil,\r\n\r\n- I updated the documentation as mentioned\r\n- Updated sanitize_parameters, you just have to provide stride now\r\n- I left unchanged the forward method as we pass only the tokenizer inputs to the model\r\n- Correct spaces for readability\r\n- I also provided an example above between the current implementation with this one\r\n\r\nWhat do you think about it?",
"@Narsil,\r\nFinally, it is better to not update the scores and merge results after entities aggregation.\r\nEach chunk is entirely pass through the model hence and we aggregate the results.\r\n\r\nWith the sentence:\r\n\r\n- [...] Paris Hilton [...]\r\n\r\nWe have the corresponding chunks:\r\n- [...] Paris\r\n- [...] Paris Hilton [...]\r\n- Hilton [...]\r\n\r\nWith the following entities:\r\n\r\n- Paris -> LOC\r\n- Paris Hilton -> PER\r\n- Hilton -> ORG\r\n\r\nThe first step consist of merging results backward and now entities become:\r\n\r\n- Paris -> LOC\r\n- Paris Hilton -> PER\r\n\r\nThe last step, then merging results forward to get the desired entity:\r\n\r\n- Paris Hilton -> PER\r\n\r\nIf we found different entities at the same start/end index, we take the longest one and if lengths are equal, we take the highest score. This approach is clearly better. We passed all the tests.\r\n\r\nI'll start creating a validation set to have results of what we've done.",
"> I'll start creating a validation set to have results of what we've done.\r\n\r\nSounds great ! Again don't hesitate to ask for resources for larger runs.",
"> > I'll start creating a validation set to have results of what we've done.\r\n> \r\n> Sounds great ! Again don't hesitate to ask for resources for larger runs.\r\n\r\nGreat, do you have a specific model and/or dataset to perform our tests? I am also interested for resources.\r\n\r\nThe tests must be done with `aggregation_strategy` to `\"simple\"`, `\"first\"`, `\"average\"` and `\"max\"`. If `None` is set we can have the following:\r\n\"We went to Manhattan Burgers, it was awesome!\"\r\n- \"Manhattan\" -> \"B-LOC\"\r\n- \"Burgers\" -> \"I-ORG\"\r\n\r\nAnd here again we can update scores as we did previously, but this is not perfect.\r\nCorrections can only be applied if `aggregation_strategy` different from `None`.",
"@Narsil, I finished the tests for this pipeline and the results are convincing since it improves the current implementation.\r\n\r\n### A summary of the PR:\r\n\r\nThis PR improve the TokenClassificationPipeline by extending its usage to tokenized texts longer than `model_max_length` by returning overflowing tokens as chunks rather than truncating texts. To enable the use of this extended feature, you must use a fast tokenizer with an aggregation strategy different to `\"none\"` and provide a `stride` number.\r\n\r\n### Approaches:\r\n\r\nDifferent approaches have been experimented: \r\n\r\n1. Updating tokens scores in overlapping parts.\r\n2. Aggregating entities for all chunks, no matter the overlapping parts.\r\n\r\nThe first approach which consist of updating scores with the highest is not the best. A more \"confident\" token doesn't mean the more likely it is, adversarial attacks show that plenty.\r\n\r\nThe second approach (selected approach) consist of processing each chunk and aggregate entities no matter overlapping part. In the final aggregation step, we select the best entity in overlapping parts with a rule. We first look at the longest entity and if entities have the same length, we take the entity with the highest score. **Note that taking the best entity first on its length, then on the highest score (if lengths are equal) give better results than just taking the highest score.**\r\n\r\nExample:\r\nGiven the following entities from their respective chunk:\r\n\"New York\" -> \"LOC\": from chunk no. 1\r\n\"New York City\" -> \"LOC\": from chunk no. 2\r\nThe remaining entity in aggregated entities will be \"New York City\" -> \"LOC\"\r\n\r\n### Results\r\n\r\nIn order to compare the current implementation with this one, we generated labeled text from the conll2003 dataset (available on the hub). Then, compared the number of exact match and wrong match only in the first chunk for each implementation. You can download the notebook as HTML: [token_classification_comparison.zip](https://github.com/huggingface/transformers/files/10892808/token_classification_comparison.zip)\r\n\r\n\r\n\r\nWe have in 378 texts (with more than 1 chunk):\r\n`aggregation_strategy=\"simple\"`\r\n\r\n- **12862** exact matches and **2042** wrong matches for the proposed implementation\r\n- **12739** exact matches and **2181** wrong matches for the current implementation\r\n\r\n`aggregation_strategy=\"first\"`\r\n\r\n- **13083** exact matches and **1415** wrong matches for the proposed implementation\r\n- **12984** exact matches and **1478** wrong matches for the current implementation\r\n\r\n`aggregation_strategy=\"average\"`\r\n\r\n- **13009** exact matchs and **1390** wrong matches for the proposed implementation\r\n- **12921** exact matchs and **1436** wrong matches for the current implementation\r\n\r\n`aggregation_strategy=\"max\"`\r\n\r\n- **13037** exact matches and **1399** wrong matches for the proposed implementation\r\n- **12944** exact matches and **1453** wrong matches for the current implementation\r\n\r\n### Implementation\r\n\r\nWe only changed the implementation from Pipeline to ChunkPipeline. The underlying tests for this pipeline remain the same as functions don't change since the previous implementation. Each chunk is processed individually. Entities are aggregated in a new function called `aggregate_entities`.",
"I haven't forgotten this PR, it seems to have some external interest.\r\n\r\nI wanted to dedicate some good time for a proper review and didn't have a lot. I'm looking at it tomorrow.\r\n\r\nThanks for your work ! ",
"And here are more complete logs so you can inspect a bit more the edgy cases if you want:\r\n\r\nwikiann all languages x top 5 token classification models\r\n\r\nOverall it seems good enough to add to me.",
"> Do you agree with the proposed test for actually checking how it performs on stride conflicts ?\r\n\r\nIt's a good approach too. It makes more sense since concatenate sentences can lead to new randomly generated sentences.\r\nHowever, with this new script, we need to be careful of:\r\n\r\n- most of sentences in datasets fit in a sequence of length 50 (`uninteresting` is incremented when it's the case)\r\n- we take as references outputs of the pipeline without chunking+striding and not the references from the dataset itself\r\n- only one non-detected entity in a sentence with the pipeline chunking+striding will cause an entity offset and lead to wrong results",
"> * There's an optional printing to see the different tokens. I looked at it , and didn't see anything shocking, some spans are a bit different, some group are different and without context it's hard to tell who's correct.\r\n\r\nI noticed the same behavior with the previous tests. In our case, the aim is not to apply corrections on predictions but to aggregate entities. If the model is 100% correct, the aggregated entities will also be 100% correct.",
"> And here are more complete logs so you can inspect a bit more the edgy cases if you want:\r\n\r\nI think you forgot to link the results\r\n\r\n> wikiann all languages x top 5 token classification models\r\n> \r\n> Overall it seems good enough to add to me.\r\n\r\nYes, with the previous tests and with manual tests on different content, it's working as expected.",
"I refactored the code as you mentioned. All is good on my side ",
"Oops: https://gist.github.com/Narsil/e8609805e8e52c7e4114586eede8a481",
"> Oops: https://gist.github.com/Narsil/e8609805e8e52c7e4114586eede8a481\r\n\r\nGood results!",
"@Narsil, I updated the comments in the Files Changed section that give a better explanation of the `aggregate_overlapping_entities()` method. I don't know if you received a notification for that. Is it good for you?",
"I didn't but we're still missing some tests though.\r\n\r\nI offered to help writing them if you want, but we really want tests to show case this feature (and make sure it doesn't break later).\r\n\r\nThe most important will be small tests (with tiny models) so that they run always.\r\nAnd showcase what's happening on simple strings while setting `model_max_length` to something super tiny to force the chunking to happen.",
"I thought it was for @sgugger since you already created a script that shows it works. @Narsil your help is welcome. I have in mind to create specific tests with manually checked references to ensure it doesn't break later.",
"No I pinged him because the code was clean enough to be looked at, but the tests (especially for such a complex feature) are mandatory.\r\n\r\nIf you can get started that would be immensely appreciated, but please share if you're having a hard time or don't have the bandwidth to do it. It shouldn't take me that much time but it's always better if you can write the tests yourself.\r\n",
"Great, I'll do it",
"@Narsil, I finished the tests.\r\nThe selected model is `elastic/distilbert-base-uncased-finetuned-conll03-english` (pytorch_model.bin of size 266MB). It is not a tiny model as you recommend. I tried different tiny models but unfortunately, they don't perform well on our example. In fact, it is not noticeable when running since it requires between 2s and 3s to run all the new tests.\r\n\r\nThe tests are composed in two parts:\r\n\r\n1. `test_chunking()`: Test the pipeline with all aggregation strategies and match an hard coding output\r\n2. `test_regular_chunk()`: Test the pipeline with all aggregation strategies and match output (without scores) between regular output (without chunking+striding) and chunked output (with chunking+striding)\r\n\r\nThe second test, `test_regular_chunk()` can be optional since the first, `test_chunking()` was created with the same rule: regular output match chunked output. But, even if the regular output should not be interpreted as a reference, in this case, it is good to show that we didn't create a flawed test.\r\n\r\nThe selected parameters for these tests are:\r\n- `model_max_length=10`: extremely tiny to increase the difficulty\r\n- `stride=5`: quite large (regarding `model_max_length`) to generate multiple chunks (15 chunks in our example)\r\n\r\nYou can also find below the output without `aggregate_overlapping_entities()` (but sorted by `\"start\"` for more readability) to see how the tests cover the different overlapping cases with the text:\r\n`\"Hugging Face, Inc. is a French company that develops tools for building applications using machine learning. The company, based in New York City was founded in 2016 by French entrepreneurs Clément Delangue, Julien Chaumond, and Thomas Wolf.\"`\r\n \r\n\r\n\r\nAnd below, the same output with `aggregate_overlapping_entities()`:\r\n\r\n\r\n\r\nIs it good for you?",
"Hello @Narsil, I don't know if you didn't receive the update for the tests or if you don't have time to look at it. Do not hesitate if you need additional work to be done",
"> @luccailliau I took the liberty of adding directly the small test I had in mind. and pushing the other 2 tests to slow tests (since they use real models). They are good tests btw !\r\n\r\nGreat! Yes for sure, it is not good to use a real model. I also look at hf-internal-testing model but didn't found hf-internal-testing/tiny-bert-for-token-classification\r\n",
"@Narsil, thank you very much for your help and your time on this PR, it was a pleasure!\r\n@sgugger, I updated your changes, ready to go"
] | 1,677
| 1,679
| 1,679
|
CONTRIBUTOR
| null |
This PR improves the `TokenClassificationPipeline` by extending it to tokenized texts longer than `model_max_length`: overflowing tokens are returned as additional chunks instead of being truncated away. To enable this extended feature, you must use a fast tokenizer with an aggregation strategy different from `"none"` and provide a `stride` value.
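A minimal usage sketch of the feature (the checkpoint name below is illustrative, not part of this PR):
```python
from transformers import pipeline

# Illustrative checkpoint; any token-classification model with a fast tokenizer works.
token_classifier = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="simple",  # must not be "none" for chunking to apply
    stride=64,  # token overlap between consecutive chunks
)

long_text = "Hugging Face is based in New York City. " * 500  # exceeds model_max_length
entities = token_classifier(long_text)
print(entities[:3])
```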
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21771/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21771/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21771",
"html_url": "https://github.com/huggingface/transformers/pull/21771",
"diff_url": "https://github.com/huggingface/transformers/pull/21771.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21771.patch",
"merged_at": 1679508801000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21770
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21770/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21770/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21770/events
|
https://github.com/huggingface/transformers/pull/21770
| 1,597,488,754
|
PR_kwDOCUB6oc5Ko8W0
| 21,770
|
Support LoRA for clip text encoder in diffusers
|
{
"login": "haofanwang",
"id": 18741068,
"node_id": "MDQ6VXNlcjE4NzQxMDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/18741068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haofanwang",
"html_url": "https://github.com/haofanwang",
"followers_url": "https://api.github.com/users/haofanwang/followers",
"following_url": "https://api.github.com/users/haofanwang/following{/other_user}",
"gists_url": "https://api.github.com/users/haofanwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haofanwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haofanwang/subscriptions",
"organizations_url": "https://api.github.com/users/haofanwang/orgs",
"repos_url": "https://api.github.com/users/haofanwang/repos",
"events_url": "https://api.github.com/users/haofanwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/haofanwang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The support for LoRA should be done using our new [peft](https://github.com/huggingface/peft) library. We won't change Transformers models directly. cc @pacman100 @patrickvonplaten ",
"Sure, it makes sense to me. I'm glad to know. I will directly make a new PR in diffusers."
] | 1,677
| 1,677
| 1,677
|
NONE
| null |
# What does this PR do?
Supports the feature requested in https://github.com/huggingface/diffusers/issues/2469. Stable Diffusion uses the CLIP text encoder (`CLIPTextModel`), which does not yet support adding LoRA layers. What we have done is quite similar to [UNet2DConditionModel](https://github.com/huggingface/diffusers/blob/e5810e686ea4ac499e325c2961808c8972dee039/src/diffusers/models/unet_2d_condition.py#L53).
# What to expect after this PR?
```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers.models.cross_attention import LoRACrossAttnProcessor
tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="text_encoder")
text_encoder.requires_grad_(False)
# add LoRA layers
lora_attn_procs = {}
for name in text_encoder.attn_processors.keys():
cross_attention_dim = None if name.endswith("self_attn.processor") else text_encoder.config.hidden_size
hidden_size = text_encoder.config.hidden_size
lora_attn_procs[name] = LoRACrossAttnProcessor(
hidden_size=hidden_size, cross_attention_dim=cross_attention_dim
)
text_encoder.set_attn_processor(lora_attn_procs)
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
outputs = text_encoder(**inputs)
# only added LoRA weights require gradients
for name, param in text_encoder.named_parameters():
print(name, param.requires_grad)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21770/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21770",
"html_url": "https://github.com/huggingface/transformers/pull/21770",
"diff_url": "https://github.com/huggingface/transformers/pull/21770.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21770.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21769
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21769/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21769/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21769/events
|
https://github.com/huggingface/transformers/pull/21769
| 1,597,471,577
|
PR_kwDOCUB6oc5Ko4ol
| 21,769
|
[deepspeed tests] fix issues introduced by #21700
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21769). All of your documentation changes will be reflected on that endpoint."
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
https://github.com/huggingface/transformers/pull/21700 changed the default logging level, which broke multiple DeepSpeed tests.
This PR applies a hack to restore the log level to what it was before.
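For context, a minimal sketch of a save/restore pattern (an assumption about the shape of the hack; the actual diff may differ):
```python
import logging

root = logging.getLogger()
previous_level = root.getEffectiveLevel()  # remember the level currently in effect
try:
    root.setLevel(logging.INFO)  # the level the deepspeed tests previously relied on
    # ... run the code whose log output the tests assert on ...
finally:
    root.setLevel(previous_level)  # restore, so other tests are unaffected
```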
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21769/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21769/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21769",
"html_url": "https://github.com/huggingface/transformers/pull/21769",
"diff_url": "https://github.com/huggingface/transformers/pull/21769.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21769.patch",
"merged_at": 1677187346000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21768
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21768/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21768/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21768/events
|
https://github.com/huggingface/transformers/pull/21768
| 1,597,394,214
|
PR_kwDOCUB6oc5Kooep
| 21,768
|
Make schedulers picklable by making lr_lambda fns global
|
{
"login": "connor-henderson",
"id": 78612354,
"node_id": "MDQ6VXNlcjc4NjEyMzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/connor-henderson",
"html_url": "https://github.com/connor-henderson",
"followers_url": "https://api.github.com/users/connor-henderson/followers",
"following_url": "https://api.github.com/users/connor-henderson/following{/other_user}",
"gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions",
"organizations_url": "https://api.github.com/users/connor-henderson/orgs",
"repos_url": "https://api.github.com/users/connor-henderson/repos",
"events_url": "https://api.github.com/users/connor-henderson/events{/privacy}",
"received_events_url": "https://api.github.com/users/connor-henderson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"It may be a good idea to test this feature:\r\n\r\nIn file `tests/optimization/test_optimization.py`:\r\nAdd class\r\n```python\r\nclass LambdaScheduleWrapper:\r\n \"\"\"See https://github.com/huggingface/transformers/issues/21689\"\"\"\r\n def __init__(self, fn):\r\n self.fn = fn\r\n\r\n def __call__(self, *args, **kwargs):\r\n return self.fn(*args, **kwargs)\r\n\r\n @classmethod\r\n def wrap_scheduler(cls, scheduler: LambdaLR):\r\n scheduler.lr_lambdas = list(map(cls, scheduler.lr_lambdas))\r\n```\r\n\r\nAnd wrap the schedulers before testing the reload process in `ScheduleInitTest.test_schedulers`:\r\n```python\r\n ...\r\n scheduler = scheduler_func(self.optimizer, **kwargs)\r\n++ LambdaScheduleWrapper.wrap_scheduler(scheduler) # wrap to test picklability of the schedule\r\n lrs_2 = unwrap_and_save_reload_schedule(scheduler, self.num_steps)\r\n self.assertListEqual(lrs_1, lrs_2, msg=f\"failed for {scheduler_func} in save and reload\")\r\n```\r\n",
"> It may be a good idea to test this feature:\r\n\r\nThank you for filing the issue and sharing this test! I'll leave the decision of whether we should include this test to the maintainers. It may come down to tests being explicitly for functionality as opposed to something like picklability.",
"Yes it would be nice to have a test such as the one above.",
"Thank you for the comments and sorry for the delay",
"Thanks again for your contribution!"
] | 1,677
| 1,679
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Makes schedulers picklable by defining the `lr_lambda` functions at module (global) scope, at the cost of the extra step of passing arguments explicitly via `functools.partial`. A minimal sketch of the pattern follows the list below.
Closes #21689
Implements the change mentioned in the issue across the following functions in optimization.py:
`get_constant_schedule`
`get_constant_schedule_with_warmup`
`get_linear_schedule_with_warmup`
`get_cosine_schedule_with_warmup`
`get_cosine_with_hard_restarts_schedule_with_warmup`
`get_inverse_sqrt_schedule`
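A minimal sketch of the pattern (the function names here are illustrative, not the exact ones in `optimization.py`):
```python
import pickle
from functools import partial

import torch.nn as nn
from torch.optim import SGD
from torch.optim.lr_scheduler import LambdaLR


# Module-level (global) function: picklable, unlike a closure defined inside the getter.
def _linear_with_warmup_lambda(current_step, *, num_warmup_steps, num_training_steps):
    if current_step < num_warmup_steps:
        return current_step / max(1, num_warmup_steps)
    return max(
        0.0,
        (num_training_steps - current_step) / max(1, num_training_steps - num_warmup_steps),
    )


def get_linear_schedule(optimizer, num_warmup_steps, num_training_steps):
    # Bind the schedule's parameters with partial instead of capturing them in a closure.
    lr_lambda = partial(
        _linear_with_warmup_lambda,
        num_warmup_steps=num_warmup_steps,
        num_training_steps=num_training_steps,
    )
    return LambdaLR(optimizer, lr_lambda)


optimizer = SGD(nn.Linear(2, 2).parameters(), lr=0.1)
scheduler = get_linear_schedule(optimizer, num_warmup_steps=10, num_training_steps=100)
pickle.dumps(scheduler)  # succeeds: the lr_lambda is a partial over a global function
```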
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [link](https://github.com/huggingface/transformers/issues/21689)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21768/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21768",
"html_url": "https://github.com/huggingface/transformers/pull/21768",
"diff_url": "https://github.com/huggingface/transformers/pull/21768.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21768.patch",
"merged_at": 1677776923000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21767
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21767/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21767/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21767/events
|
https://github.com/huggingface/transformers/pull/21767
| 1,597,314,566
|
PR_kwDOCUB6oc5KoXR2
| 21,767
|
Fix-ci-whisper
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @ArthurZucker \r\n\r\nI am running on a GCP VM (which is quite close to the CI runner env.), but I still get the same error.\r\nAlso this CI failure is shown up on Feb. 22nd, and at that time, I don't see any change on the Hub reop `openai/whisper-large`.\r\nThe latest change is only 15-17 ago.\r\n\r\nCould you double check the cause and the fix, thank you. Or let me know if I miss something.",
"<img width=\"1615\" alt=\"image\" src=\"https://user-images.githubusercontent.com/48595927/221133297-dd977d13-79f3-48d7-a121-3c25220730fb.png\">\r\nThe tokenizers where properly modified",
"I ran the tests locally and they passed so not really sure what is happening. Also let's add the TF_call fix for doc daily. It is probably related to a TF PR\r\n",
"I got the 3rd id is different in `generated_ids` and `EXPECTED_LOGITS` when running this PR. I will check on the acutal CI runner.\r\n\r\n```bash\r\n(Pdb) generated_ids\r\ntensor([[50258, 50259, 50359, 50363, 2221, 13, 2326, 388, 391, 307,\r\n 264, 50244, 295, 264, 2808, 5359, 293, 321, 366, 5404],\r\n [50258, 50259, 50359, 50363, 6966, 307, 2221, 13, 2326, 388,\r\n 391, 311, 9060, 1570, 1880, 813, 702, 1871, 13, 50257],\r\n [50258, 50259, 50359, 50363, 634, 5112, 505, 300, 412, 341,\r\n 42729, 3196, 295, 264, 1064, 11, 365, 5272, 293, 12904],\r\n [50258, 50259, 50359, 50363, 634, 575, 12525, 22618, 1968, 6144,\r\n 35617, 20084, 1756, 311, 589, 307, 534, 10281, 934, 439]])\r\n(Pdb) EXPECTED_LOGITS\r\ntensor([[50258, 50259, 50358, 50363, 2221, 13, 2326, 388, 391, 307,\r\n 264, 50244, 295, 264, 2808, 5359, 293, 321, 366, 5404],\r\n [50258, 50259, 50358, 50363, 6966, 307, 2221, 13, 2326, 388,\r\n 391, 311, 9060, 1570, 1880, 813, 702, 1871, 13, 50257],\r\n [50258, 50259, 50358, 50363, 634, 5112, 505, 300, 412, 341,\r\n 42729, 3196, 295, 264, 1064, 11, 365, 5272, 293, 12904],\r\n [50258, 50259, 50358, 50363, 634, 575, 12525, 22618, 1968, 6144,\r\n 35617, 20084, 1756, 311, 589, 307, 534, 10281, 934, 439]])\r\n(Pdb)\r\n\r\n```",
"Okay, checking! I was wrong, the task should be `translate` and not `transcribe`. But the default in the generation config is `translate`. So I am not sure I understand, this should not be failing.\r\n\r\nThe docstest is failing because of a recent change in the default arguments handled in the TF generation function see #21580 and #21525",
"doc test and related slow test both pass locally cc @ydshieh ",
"> But the default in the generation config is translate. So I am not sure I understand, this should not be failing.\r\n\r\nwhen I run the test and check the generation config in modeling forward, generation config has no task attribute, and `task` argument passed is also `None`. I think it explains things\r\n\r\n"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Fixes the failing test. It is related to a modification of the configuration on the Hub. https://github.com/huggingface/transformers/pull/21307/files#diff-cf6c12f8da48db4d91bcc6db32ecb7c1609a76e30719b5d47cccf595d326d235 already fixed this before; this PR now enforces the transcription task.
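As an illustrative sketch of pinning the task explicitly at generation time, so tests do not depend on a mutable Hub default (not the exact diff in this PR):
```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Pin language and task so generation does not fall back to config defaults.
forced_decoder_ids = processor.get_decoder_prompt_ids(language="en", task="transcribe")
# generated_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
```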
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21767/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21767",
"html_url": "https://github.com/huggingface/transformers/pull/21767",
"diff_url": "https://github.com/huggingface/transformers/pull/21767.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21767.patch",
"merged_at": 1677235165000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21766
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21766/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21766/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21766/events
|
https://github.com/huggingface/transformers/pull/21766
| 1,597,307,108
|
PR_kwDOCUB6oc5KoVsv
| 21,766
|
Add Mega: Moving Average Equipped Gated Attention
|
{
"login": "mnaylor5",
"id": 20518095,
"node_id": "MDQ6VXNlcjIwNTE4MDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/20518095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mnaylor5",
"html_url": "https://github.com/mnaylor5",
"followers_url": "https://api.github.com/users/mnaylor5/followers",
"following_url": "https://api.github.com/users/mnaylor5/following{/other_user}",
"gists_url": "https://api.github.com/users/mnaylor5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mnaylor5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnaylor5/subscriptions",
"organizations_url": "https://api.github.com/users/mnaylor5/orgs",
"repos_url": "https://api.github.com/users/mnaylor5/repos",
"events_url": "https://api.github.com/users/mnaylor5/events{/privacy}",
"received_events_url": "https://api.github.com/users/mnaylor5/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Sorry for the initial test failures! It should be taken care of now. Also I wanted to point out that I did not have access to a GPU while developing this, so I was not able to test on a GPU",
"Hi @mnaylor5 \r\nThanks for your great work on this! Let us know when do you think this is ready for review 💪 ",
"Thank you @younesbelkada! It is ready for review now 😄 ",
"Hi @younesbelkada / @ArthurZucker - just checking in to see if there is anything you need from me before reviewing this pull request. Looking forward to being able to use Mega in `transformers`!",
"Hey! I'll give you a review tomorrow! Sorry for the wait, had to synch with @younesbelkada on this one",
"Thanks @ArthurZucker, and no worries! 😄 ",
"Thanks for the review @ArthurZucker! I'll reply to individual comments where I can clear things up, and I'll accept your suggestions wherever I can. I'll probably be able to start on the modifications later today, and if not, then early next week.",
"Alright @ArthurZucker this should be good to review again! The biggest updates in this version are removing the `reset_parameters` methods in favor of `_init_weights`, renaming variables/comments to avoid single-letter names, docstring format updates, and renaming `Mega` to `MEGA` based on your suggestion. I have resolved the comments where I made the changes, and left the other comments in place for continued discussion.\r\n\r\nThanks again for your feedback, and I'm happy to answer any questions that arise. Looking forward to getting MEGA into the library! 🚀 ",
"Hi there @ArthurZucker - thanks again for the feedback in your previous review. Just reaching out to see if anything else is needed before reviewing and hopefully merging!",
"Hey! Sorry I must have missed your previous ping! Will review now!",
"Thanks @ArthurZucker! I appreciate the quick review and the encouragement 😄 \r\nI added a couple of questions where things weren't totally clear to me, but I can get started on everything else now. I'm really excited about getting this model into the library, and hopefully there won't be too many more changes required!",
"Will answer to your questions tomorrow! ",
"Alright @ArthurZucker, I think that's everything except the threads with ongoing discussion. I'm super happy with how this is shaping up! In the latest batch of commits:\r\n* Renamed classes, variables, and params based on comments (mainly in EMA and MovingAverageGatedAttention class)\r\n* Rearranged positional bias, normalization functions, activation functions, dropout classes\r\n* Added the `copied from comments` where requested\r\n* Added token type ID buffer\r\n* Added tests for generation and sequence classification\r\n* Moved FFT convolution into a reusable method with additional documentation\r\n* Addressed merge conflicts from LLaMA 🦙 \r\n\r\nThanks for the feedback and I'll wait on any more changes until you get a chance to review the updates and resolve the open discussions. Excited to get up and running with MEGA in `transformers` 🚀 🤗 ",
"@ArthurZucker as an update, it looks like the fix for left-padding is going to be a more significant effort to implement -- the relative bias is applied in the attention function, and it expects all of the inputs to be left-to-right starting at position 0. We can probably refactor to accept the position IDs like they did for CodeGen, but we'll also need to change how the bias is added since it is currently using a single `(seq_len, seq_len)` tensor for the entire batch. Refactoring that might be the heavier lift, but I'm still exploring. \r\n\r\nI'll dig more into this tomorrow, but for the meantime, I've pushed updates that address the rest of your comments! If you have any other suggestions on the fix for relative positions, I'd love to hear them! 😄 ",
"Sure! Also it's not that important to have left padding in this PR, can be added in another PR! ",
"Thanks @ArthurZucker! After digging into it, I do think it will require a pretty significant refactor to support left-padding in this PR. If you're comfortable with it, I agree that it could make sense in a new PR. I just added an entry in the `MegaBlock` docstring for the new `causal_mask` coming from the pretrained model's method, and added a missing `device` for the token type IDs. \r\n\r\nAlso pulled latest changes from `main` to hopefully prevent whatever was causing the tests for exotic models to fail. I'm really happy with how this is looking, so let me know if there's anything else needed to move forward with this PR! Appreciate your comments and guidance on everything so far! :rocket:",
"Awesome, it's alright with me to leave this to another PR. Will do my final review before pinging @sgugger for another pair of eyes! ",
"Thanks again @ArthurZucker and @sgugger! Appreciate the feedback, and it should all be addressed in the latest changes 🤗 ",
"Great working with you @mnaylor5 ! Congrats again on the merge 🔥 ",
"Congrats @mnaylor5 ! Feel free to share on social media and we'll amplify your post",
"Thanks so much @ArthurZucker and @NielsRogge! I learned a ton through this process, and it's so rewarding to see my code in a library I use so much :heart:\r\n\r\nI posted something here on LinkedIn a couple days ago - I'll tag you guys in the comments as well! \r\nhttps://www.linkedin.com/posts/mitchnaylor_mega-activity-7045103140890660864-9VOU"
] | 1,677
| 1,679
| 1,679
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19982
This pull request adds [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655), which is the current leader of the [LRA benchmark](https://paperswithcode.com/sota/long-range-modeling-on-lra). Adapted from the original [fairseq-based repo](https://github.com/facebookresearch/mega), using an MLM checkpoint I created with the original implementation on the wikitext-103 dataset. There is no proposed Mega tokenizer, so I used the RoBERTa tokenizer, which is also what I used for the wikitext checkpoint. The proposed implementation works in both encoder and decoder settings, and all relevant tests are passing.
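A short usage sketch (the checkpoint name is an assumption based on the description above; the classes become available once this PR is merged):
```python
import torch
from transformers import AutoTokenizer, MegaForMaskedLM

# Assumed checkpoint: the wikitext-103 MLM checkpoint mentioned above.
tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
model = MegaForMaskedLM.from_pretrained("mnaylor/mega-base-wikitext")

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Decode the top prediction for the masked position.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
print(tokenizer.decode(logits[0, mask_index].argmax()))
```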
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada for text models; tagging @NielsRogge for visibility as he responded to the original issue.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21766/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21766",
"html_url": "https://github.com/huggingface/transformers/pull/21766",
"diff_url": "https://github.com/huggingface/transformers/pull/21766.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21766.patch",
"merged_at": 1679660248000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21765
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21765/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21765/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21765/events
|
https://github.com/huggingface/transformers/pull/21765
| 1,597,175,748
|
PR_kwDOCUB6oc5Kn5bP
| 21,765
|
[Flax] Fix erroneous kwargs being passed to generate config
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,687
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Setting the `dtype` with Flax `.from_pretrained` is throwing a `TypeError`:
```python
from transformers import FlaxAutoModelForSpeechSeq2Seq
import jax.numpy as jnp
model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny.en", dtype=getattr(jnp, "float32"))
```
<details>
<summary> Traceback </summary>
```python
File "<string>", line 1, in <module>
File "/Users/sanchitgandhi/transformers/src/transformers/models/auto/auto_factory.py", line 471, in from_pretrained
return model_class.from_pretrained(
File "/Users/sanchitgandhi/transformers/src/transformers/modeling_flax_utils.py", line 955, in from_pretrained
model.generation_config = GenerationConfig.from_pretrained(
File "/Users/sanchitgandhi/transformers/src/transformers/generation/configuration_utils.py", line 539, in from_pretrained
return cls.from_dict(config_dict, **kwargs)
File "/Users/sanchitgandhi/transformers/src/transformers/generation/configuration_utils.py", line 575, in from_dict
logger.info(f"Generate config {config}")
File "/Users/sanchitgandhi/transformers/src/transformers/generation/configuration_utils.py", line 313, in __repr__
return f"{self.__class__.__name__} {self.to_json_string()}"
File "/Users/sanchitgandhi/transformers/src/transformers/generation/configuration_utils.py", line 649, in to_json_string
return json.dumps(config_dict, indent=2, sort_keys=True) + "\n"
File "/opt/homebrew/Cellar/python@3.9/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/__init__.py", line 234, in dumps
return cls(
File "/opt/homebrew/Cellar/python@3.9/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/opt/homebrew/Cellar/python@3.9/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/opt/homebrew/Cellar/python@3.9/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/opt/homebrew/Cellar/python@3.9/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/opt/homebrew/Cellar/python@3.9/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type _ScalarMeta is not JSON serializable
```
</details>
It looks like the `dtype` arg is erroneously being forwarded to the generation config via `**kwargs`. This line appears to be the culprit, where we alias `model_kwargs` to `kwargs`:
https://github.com/huggingface/transformers/blob/0ffa22f9f6662ec9a0b6b6225bf152d32ab3e151/src/transformers/modeling_flax_utils.py#L656
And then append `dtype` to `model_kwargs`:
https://github.com/huggingface/transformers/blob/0ffa22f9f6662ec9a0b6b6225bf152d32ab3e151/src/transformers/modeling_flax_utils.py#L661-L662
The `dtype` then gets silently forwarded to the generate config via `**kwargs`:
https://github.com/huggingface/transformers/blob/0ffa22f9f6662ec9a0b6b6225bf152d32ab3e151/src/transformers/modeling_flax_utils.py#L967
This PR simply sets `model_kwargs` as a copy of `kwargs` to avoid this.
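For illustration, a minimal self-contained sketch of the aliasing bug and the fix (simplified; the variable names mirror the lines linked above):
```python
dtype = "float32"  # stand-in for the jnp dtype from the repro above

# before: `model_kwargs` aliased `kwargs`, so mutating it leaked `dtype` into
# the kwargs later forwarded to GenerationConfig.from_pretrained
kwargs = {"from_pt": False}
model_kwargs = kwargs  # both names point at the same dict
model_kwargs["dtype"] = dtype
assert "dtype" in kwargs  # the generate config now receives a non-serializable value

# after: copying keeps model-only arguments out of the forwarded kwargs
kwargs = {"from_pt": False}
model_kwargs = kwargs.copy()
model_kwargs["dtype"] = dtype
assert "dtype" not in kwargs
```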
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21765/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21765",
"html_url": "https://github.com/huggingface/transformers/pull/21765",
"diff_url": "https://github.com/huggingface/transformers/pull/21765.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21765.patch",
"merged_at": 1677229159000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21764
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21764/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21764/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21764/events
|
https://github.com/huggingface/transformers/pull/21764
| 1,597,161,819
|
PR_kwDOCUB6oc5Kn2dw
| 21,764
|
[Flax Examples] Seq2Seq ASR Fine-Tuning Script
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sanchit-gandhi @andyehrenberg \r\n\r\nWe have made a version of this script will support streaming and training on the TPU pods.\r\n\r\nThe current version of the script is available here:\r\n[https://github.com/NbAiLab/nb-whisper/blob/main/run_flax_speech_recognition_seq2seq_streaming.py](https://github.com/NbAiLab/nb-whisper/blob/main/run_flax_speech_recognition_seq2seq_streaming.py)\r\n\r\nWe are however struggling with a bug at the moment. The script seems to work for training the Tiny models on multiple pod sizes. Both for scaling for speed and for increasing the batch size. All the other model sizes (small, base, medium, large) also works on the single TPU v4-8. However, training on the non-Tiny-model sizes runs for a few steps then freezes. \r\n\r\nIf anyone have any idea about this could be happening, I really appreciate it.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Given the popularity of the PyTorch fine-tuning script and Whisper JAX, it's a pretty easy addition adding a Whisper fine-tuning script in JAX/Flax.\r\n\r\nNote: this is largely based off the distil-whisper training script, but simplified to run offline, with just 1 training dataset and the cross-entropy objective https://github.com/huggingface/distil-whisper#training"
] | 1,677
| 1,696
| 1,696
|
CONTRIBUTOR
| null |
# What does this PR do?
Can be used to fine-tune Flax Whisper for speech recognition.
Tested and verified as working with the following (dummy) config:
```
run_flax_speech_recognition_seq2seq.py \
--model_name_or_path openai/whisper-tiny.en \
--dataset_name hf-internal-testing/librispeech_asr_dummy \
--dataset_config clean \
--train_split_name validation \
--eval_split_name validation \
--output_dir whisper-tiny-ft-dummy \
--overwrite_output_dir \
--num_train_epochs=2 \
--max_train_samples 10 \
--max_eval_samples 10 \
--warmup_steps=8 \
--do_train \
--do_eval \
--learning_rate=2e-4 \
--per_device_train_batch_size=2 \
--per_device_eval_batch_size=1 \
--predict_with_generate
```
Will add a README with preliminary training configs / results later this week after doing a full fine-tuning run.
cc @peregilk @andyehrenberg for interest
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21764/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21764/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21764",
"html_url": "https://github.com/huggingface/transformers/pull/21764",
"diff_url": "https://github.com/huggingface/transformers/pull/21764.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21764.patch",
"merged_at": 1696002178000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21763
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21763/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21763/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21763/events
|
https://github.com/huggingface/transformers/issues/21763
| 1,597,071,852
|
I_kwDOCUB6oc5fMWHs
| 21,763
|
Ability to specify certificate for running language training modules (e.g. run_mlm.py)
|
{
"login": "codesnail",
"id": 6446354,
"node_id": "MDQ6VXNlcjY0NDYzNTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6446354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codesnail",
"html_url": "https://github.com/codesnail",
"followers_url": "https://api.github.com/users/codesnail/followers",
"following_url": "https://api.github.com/users/codesnail/following{/other_user}",
"gists_url": "https://api.github.com/users/codesnail/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codesnail/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codesnail/subscriptions",
"organizations_url": "https://api.github.com/users/codesnail/orgs",
"repos_url": "https://api.github.com/users/codesnail/repos",
"events_url": "https://api.github.com/users/codesnail/events{/privacy}",
"received_events_url": "https://api.github.com/users/codesnail/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Examples are just examples. They can't contain 1,000 arguments to support every user's need without becoming unreadable. In this instance, you should just change the example to suit your need.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,680
| 1,680
|
NONE
| null |
### Feature request
Add a command-line param in the training modules, e.g., run_mlm.py, to take in a custom certificate file for the requests module.
### Motivation
When running behind a VPN, the training modules, e.g., the run_mlm.py script, give SSL errors when trying to connect to URLs, e.g., to download a pre-trained model. The error looks like this:
```
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /roberta-base/resolve/main/config.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997)')))
```
This seems to happen when the code runs on a device that is behind a VPN, because the VPN connection uses a custom root cert.
### Your contribution
I was able to resolve the issue by specifying a certificate explicitly in `requests.get()` in `_http.py`, as follows:
```python
ca = "company_root_ca_exported_from_my_browser"
```
then modified the following line:
```python
response = requests.request(method=method, url=url, **kwargs)
```
to this:
```python
response = requests.request(method=method, url=url, verify=ca, **kwargs)
```
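A possible workaround that avoids patching the library (a sketch, assuming the downloads go through `requests`, which honors the `REQUESTS_CA_BUNDLE` environment variable; the path is hypothetical):
```python
import os

# point requests at the exported company root CA (hypothetical path)
os.environ["REQUESTS_CA_BUNDLE"] = "/path/to/company_root_ca.pem"

from transformers import AutoConfig

config = AutoConfig.from_pretrained("roberta-base")  # now verifies against the custom CA
```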
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21763/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21762
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21762/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21762/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21762/events
|
https://github.com/huggingface/transformers/pull/21762
| 1,596,845,536
|
PR_kwDOCUB6oc5KmxrT
| 21,762
|
[time series] updated expected values for integration test.
|
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Updated the integration test expected values.
cc @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21762/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21762",
"html_url": "https://github.com/huggingface/transformers/pull/21762",
"diff_url": "https://github.com/huggingface/transformers/pull/21762.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21762.patch",
"merged_at": 1677238615000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21761
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21761/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21761/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21761/events
|
https://github.com/huggingface/transformers/issues/21761
| 1,596,841,483
|
I_kwDOCUB6oc5fLd4L
| 21,761
|
AttributeError: type object has no attribute 'forward'
|
{
"login": "MattiaSangermano",
"id": 43407984,
"node_id": "MDQ6VXNlcjQzNDA3OTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/43407984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MattiaSangermano",
"html_url": "https://github.com/MattiaSangermano",
"followers_url": "https://api.github.com/users/MattiaSangermano/followers",
"following_url": "https://api.github.com/users/MattiaSangermano/following{/other_user}",
"gists_url": "https://api.github.com/users/MattiaSangermano/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MattiaSangermano/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MattiaSangermano/subscriptions",
"organizations_url": "https://api.github.com/users/MattiaSangermano/orgs",
"repos_url": "https://api.github.com/users/MattiaSangermano/repos",
"events_url": "https://api.github.com/users/MattiaSangermano/events{/privacy}",
"received_events_url": "https://api.github.com/users/MattiaSangermano/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey! Thanks for posting. \r\nYou meed to re-define the `forward` method. The following script works as expected\r\n```python \r\nfrom transformers import BertConfig, TFBertForSequenceClassification, AutoTokenizer, BertConfig\r\nimport tensorflow as tf\r\n\r\nclass CustomTFModel(TFBertForSequenceClassification):\r\n def __init__(self, config: BertConfig, *inputs, **kwargs):\r\n super().__init__(config, *inputs, **kwargs)\r\n def forward(self, **kwargs):\r\n super().call(**kwargs)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\r\nconf = BertConfig.from_pretrained('bert-base-uncased')\r\nmodel = CustomTFModel(conf)\r\n#model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased')\r\nlabels = tf.convert_to_tensor([0,1,2])\r\n\r\nds = tf.data.Dataset.from_tensor_slices({'input_ids':model.dummy_inputs['input_ids'],'labels': labels}).batch(1)\r\n\r\nmodel.compile(\r\n tf.keras.optimizers.Adam(learning_rate=3e-5),\r\n metrics='accuracy'\r\n )\r\nmodel.fit(\r\n ds,\r\n epochs=1,\r\n shuffle=True,\r\n verbose=1,\r\n)\r\n```\r\n(that is a quick fix, for TF it should be checking for a `call` method).\r\nWe check whether the model name starts with `TF`. So `TFCustomModel` will also work without redefining the function.\r\n```python \r\nclass TFCustomModel(TFBertForSequenceClassification):\r\n def __init__(self, config: BertConfig, *inputs, **kwargs):\r\n super().__init__(config, *inputs, **kwargs)\r\n```\r\nThis probably was not clear so we might need to update the doc @gante ",
"Hey! Thank you very much for your quick reply. Now it is working correctly.\r\n\r\nI was using the quick fix you proposed too, but I noticed that during training the metrics that are passed inside `model.compile` are not evaluated. I don't know if this is a problem on your side or if it is due to tensorflow, I tried to find out what is the problem but I couldn't understand it.",
"Hey all!\r\n\r\nTechnically, this problem __can__ be solved without documentation (which is tricky to sort for this particular problem). If we replace the class name check with an inheritance check (e.g. [here](https://github.com/huggingface/transformers/blob/0ffa22f9f6662ec9a0b6b6225bf152d32ab3e151/src/transformers/utils/generic.py#L392)), any name can be given to downstream classes. \r\n\r\nIn terms of code, `model_name.startswith(\"TF\")` would become `\"transformers.modeling_tf_utils.TFPreTrainedModel\" in str(inspect.getmro(model_class))`.\r\n\r\nHowever, to ensure correctness, we would need to trigger a series of changes, as several places in our codebase rely on class names prefixed with the framework. Since this is my first time seeing this problem, I'll err toward no change. Nevertheless, I wanted to leave this comment here for future reference :)\r\n\r\ncc @sgugger ",
"I think it would be great to use a class check instead, maybe checking if we inherit from a keras model or an nn module?"
] | 1,677
| 1,677
| 1,677
|
NONE
| null |
### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.4.0
- Python version: 3.8.8
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker and @gante
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```py
from transformers import BertConfig, TFBertForSequenceClassification, AutoTokenizer, BertConfig
import tensorflow as tf
class CustomTFModel(TFBertForSequenceClassification):
def __init__(self, config: BertConfig, *inputs, **kwargs):
super().__init__(config, *inputs, **kwargs)
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
conf = BertConfig.from_pretrained('bert-base-uncased')
model = CustomTFModel(conf)
#model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased')
labels = tf.convert_to_tensor([0,1,2])
ds = tf.data.Dataset.from_tensor_slices({'input_ids':model.dummy_inputs['input_ids'],'labels': labels}).batch(1)
model.compile(
tf.keras.optimizers.Adam(learning_rate=3e-5),
metrics='accuracy'
)
model.fit(
ds,
epochs=1,
shuffle=True,
verbose=1,
)
```
### Expected behavior
when I execute this code I get the error:
```bash
File "issue.py", line 21, in <module>
model.fit(
File "/home/ubuntu/anaconda3/envs/tf2_mattia/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/ubuntu/anaconda3/envs/tf2_mattia/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 1147, in autograph_handler
raise e.ag_error_metadata.to_exception(e)
AttributeError: in user code:
File "/home/ubuntu/anaconda3/envs/tf2_mattia/lib/python3.8/site-packages/keras/engine/training.py", line 1021, in train_function *
return step_function(self, iterator)
File "/home/ubuntu/anaconda3/envs/tf2_mattia/lib/python3.8/site-packages/keras/engine/training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/home/ubuntu/anaconda3/envs/tf2_mattia/lib/python3.8/site-packages/keras/engine/training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "/home/ubuntu/anaconda3/envs/tf2_mattia/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1353, in train_step
label_kwargs = find_labels(self.__class__)
File "/home/ubuntu/anaconda3/envs/tf2_mattia/lib/python3.8/site-packages/transformers/utils/generic.py", line 309, in find_labels
signature = inspect.signature(model_class.forward)
AttributeError: type object 'CustomTFModel' has no attribute 'forward'
```
I would expect that the `type object 'CustomTFModel' has no attribute 'forward'` error would not be generated, as the `CustomTFModel` class should be identical to `TFBertForSequenceClassification`. Or am I wrong?
If you use `TFBertForSequenceClassification` instead (the commented line), the error is not triggered.
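For reference, a minimal sketch of the inheritance-based check discussed in the comments above (`is_tf_model` is a hypothetical helper; the real name-based check lives in `transformers.utils.generic`):
```python
import inspect

from transformers import TFBertForSequenceClassification

class CustomTFModel(TFBertForSequenceClassification):
    pass

def is_tf_model(model_class) -> bool:
    # walk the MRO instead of relying on the "TF" name prefix, so renamed
    # subclasses such as CustomTFModel are still recognised as TF models
    return "transformers.modeling_tf_utils.TFPreTrainedModel" in str(inspect.getmro(model_class))

print(is_tf_model(CustomTFModel))  # True, even though the name does not start with "TF"
```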
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21761/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21760
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21760/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21760/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21760/events
|
https://github.com/huggingface/transformers/issues/21760
| 1,596,727,814
|
I_kwDOCUB6oc5fLCIG
| 21,760
|
ImportError: Blip2ForConditionalGeneration
|
{
"login": "garg-aayush",
"id": 17342823,
"node_id": "MDQ6VXNlcjE3MzQyODIz",
"avatar_url": "https://avatars.githubusercontent.com/u/17342823?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/garg-aayush",
"html_url": "https://github.com/garg-aayush",
"followers_url": "https://api.github.com/users/garg-aayush/followers",
"following_url": "https://api.github.com/users/garg-aayush/following{/other_user}",
"gists_url": "https://api.github.com/users/garg-aayush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/garg-aayush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/garg-aayush/subscriptions",
"organizations_url": "https://api.github.com/users/garg-aayush/orgs",
"repos_url": "https://api.github.com/users/garg-aayush/repos",
"events_url": "https://api.github.com/users/garg-aayush/events{/privacy}",
"received_events_url": "https://api.github.com/users/garg-aayush/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, thanks for using BLIP-2!\r\n\r\nIt's Blip2ForConditionalGeneration, not BLIP2ForConditionalGeneration ;)",
"@NielsRogge \r\n\r\nSame issue changing to `Blip2ForConditionalGeneration`\r\n```bash\r\n File \"/home/aayush/ControlNet/blip2_captions_test.py\", line 4, in <module>\r\n from transformers import AutoProcessor, Blip2ForConditionalGeneration\r\nImportError: cannot import name 'Blip2ForConditionalGeneration' from 'transformers' (/home/aayush/miniconda3/envs/diffusers/lib/python3.10/site-packages/transformers/__init__.py))\r\n```",
"@NielsRogge \r\n\r\nI installed `transformers` from source now\r\n```bash\r\ngit clone https://github.com/huggingface/transformers.git\r\ncd transformers\r\npip install -e .\r\n```\r\ninstead of `pip install git+https://github.com/huggingface/transformers` or `pip install transformers`.\r\n\r\nIt works now!\r\n\r\n\r\n",
"Ok, I thought you were already using that since you mentioned transformers version: 4.27.0.dev0.\r\n\r\nFeel free to close this issue if it's resolved :)",
"I installed from the source but still getting the same error. Any update here?\r\n",
"@minarainbow How did you installed from source?\r\n\r\nThis way (worked!)\r\n```\r\ngit clone https://github.com/huggingface/transformers.git\r\ncd transformers\r\npip install -e .\r\n```\r\nor (error)\r\n\r\n```\r\npip install git+https://github.com/huggingface/transformers\r\n``\r\n\r\n",
"Best is to do:\r\n\r\n```\r\npip install git+https://github.com/huggingface/transformers.git@main\r\n```",
"What's actually going on here? It's now almost November 2023 and not included? \r\n\r\nhttps://huggingface.co/docs/transformers/v4.28.1/model_doc/blip-2\r\n\r\nand\r\n\r\nhttps://huggingface.co/docs/transformers/v4.34.1/model_doc/blip-2\r\n\r\nboth cite `Blip2ForConditionalGeneration` but for me, these versions cannot import it. \r\n\r\nI install from source\r\n\r\n> `pip install git+https://github.com/huggingface/transformers.git@main`\r\n\r\nAnd still can't import `Blip2ForConditionalGeneration`\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"c:\\testing\\blip2.py\", line 1, in <module>\r\n from transformers import AutoProcessor, Blip2ForConditionalGeneration\r\nImportError: cannot import name 'Blip2ForConditionalGeneration' from 'transformers' (C:\\Users\\jorda\\AppData\\Roaming\\Python\\Python310\\site-packages\\transformers\\__init__.py) \r\n```\r\n\r\nThere is documentation citing code that doesn't exist in those versions, and you cannot follow the documentation. ",
"Hey, I cannot reproduce your issue, the following works for me: \r\n```python \r\n>>> from transformers import Blip2ForConditionalGeneration\r\n```\r\nmake sure transformers is correctly installed and that you are using the correct environment \r\n```python \r\n>>> import transformers \r\n>>> print(transformers.__version__)\r\n```",
"> Hey, I cannot reproduce your issue, the following works for me:\r\n> \r\n> ```python\r\n> >>> from transformers import Blip2ForConditionalGeneration\r\n> ```\r\n> \r\n> make sure transformers is correctly installed and that you are using the correct environment\r\n> \r\n> ```python\r\n> >>> import transformers \r\n> >>> print(transformers.__version__)\r\n> ```\r\n\r\nI think issue was how pip handling upgrades, cause installing from source worked (cloning the main repo)"
] | 1,677
| 1,698
| 1,677
|
NONE
| null |
### System Info
I am trying to run image captioning using `BLIP2` following the steps mentioned in [Link](https://huggingface.co/blog/blip-2#using-blip-2-with-hugging-face-transformers). However, there seems to be an import error for `BLIP2ForConditionalGeneration`.
`transformers version: 4.27.0.dev0`
Issue
```bash
Traceback (most recent call last):
File "/home/aayush/ControlNet/blip2_captions_test.py", line 4, in <module>
from transformers import AutoProcessor, BLIP2ForConditionalGeneration
ImportError: cannot import name 'Blip2ForConditionalGeneration' from 'transformers' (/home/aayush/miniconda3/envs/diffusers/lib/python3.10/site-packages/transformers/__init__.py)
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. pip install transformers
2. Run the following snippet
```python
import torch

from PIL import Image
from transformers import AutoProcessor, Blip2ForConditionalGeneration
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16)
```
### Expected behavior
It should have started downloading the blip-2 models locally
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21760/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21759
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21759/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21759/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21759/events
|
https://github.com/huggingface/transformers/pull/21759
| 1,596,698,099
|
PR_kwDOCUB6oc5KmRKN
| 21,759
|
Generate - update cookie cutters to not initialize cache with training and gradient checkpointing
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
MEMBER
| null |
# What does this PR do?
Add the changes in #21733 to the cookie-cutter files.
The other modeling files are being tracked in the following "Good First Issue": #21737
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21759/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21759",
"html_url": "https://github.com/huggingface/transformers/pull/21759",
"diff_url": "https://github.com/huggingface/transformers/pull/21759.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21759.patch",
"merged_at": 1677237661000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21758
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21758/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21758/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21758/events
|
https://github.com/huggingface/transformers/pull/21758
| 1,596,626,372
|
PR_kwDOCUB6oc5KmBVQ
| 21,758
|
[WIP] pass kwargs to config
|
{
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21758). All of your documentation changes will be reflected on that endpoint.",
"👍 Anyway, wasn't expecting changing something as fundamental as `.from_pretrained` to be reasonable or easy 😅 \r\n\r\n@sgugger \r\nWhile working on this I realized a couple of things. Will make separate PRs for them if need be\r\n- pruned_heads key values should be checked to be type int before casting + error message. there was also a test that used \"a\" as pruned_head but wasn't failing. will look into why later.\r\n- Some models' configs use `initializer_range` some use `init_std`. \r\nFor example while `FlaubertConfig` doesn't have `initializer_range` but tests in FlaubertModelTester pass `initializer_range` and not `init_std`... These keys don't seem to defined int the `attribute_map` either. So should probably look into those.\r\n\r\nHaving fun figuring out how `from_pretrained` magic works",
"The pruned head fix is a welcome one. As I've said before (and as you can see from all the failing tests), you cannot change the logic inside the pretrained config like this without breaking many things in the library.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,680
| 1,680
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/21757
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @Narsil
PretrainedConfig related
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21758/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21758",
"html_url": "https://github.com/huggingface/transformers/pull/21758",
"diff_url": "https://github.com/huggingface/transformers/pull/21758.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21758.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21757
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21757/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21757/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21757/events
|
https://github.com/huggingface/transformers/issues/21757
| 1,596,621,453
|
I_kwDOCUB6oc5fKoKN
| 21,757
|
Custom kwargs not passed when extending PretrainedConfig class
|
{
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is intended. Hadding the kwargs is done [here](https://github.com/huggingface/transformers/blob/78a93d17c0e0bca0bc4477e0ee362a95d79f9b22/src/transformers/configuration_utils.py#L712) but to filter out the value that have nothing to do in the config, we detect whether they are attribute of the config or not. So you should change your code like this:\r\n\r\n```py\r\nclass MyClassificationConfig(PretrainedConfig):\r\n def __init__(self, \r\n moo='poo',\r\n boo=5,\r\n **kwargs):\r\n print(boo) # prints 5 because boo doesn't get passed\r\n print(kwargs)\r\n # do custom calculations and set some custom config values here\r\n self.moo = moo\r\n self.poo = poo\r\n super().__init__(**kwargs)",
"this is saving the custom `moo`, `boo` to the returned config, \r\n\r\n`config = MyClassificationConfig.from_pretrained(\"hf-internal-testing/config-no-model\",boo=\"hoo\")`\r\n\r\nbut I still can't access the kwarg value I set inside the `__init__` to do some calculations. \r\n~~Is there a post config init function I can override?~~ I guess I can do it at call.",
"Also I'm a bit confused about the following behaviour (example):\r\n\r\nEven though the DinatConfig class doesn't take as a param `image_size`\r\nand thus doesn't do `self.image_size=image_size` anywhere if I pass \r\n`config=DinatConfig(...,image_size=128,...)` this (imagined) image_size param becomes part of the config(because super() assigns them)\r\n\r\n`DinatConfig(image_size=5).image_size`\r\n\r\nwhich can be good if you want to save some stuff along in your config \r\nbut isn't it also prone to confusion say that I confuse config key names?\r\nAlso if I can just pass what ever kwarg and it becomes part of config then do I really need to extend the PretrainedConfig class if I'm just assigning passed kwargs to self 🤔 I guess just to assign default values.",
"Sorry you actually don't need to set them as attributes, you just need to pass them along to the super method:\r\n```\r\nclass MyClassificationConfig(PretrainedConfig):\r\n def __init__(self, \r\n moo='poo',\r\n boo=5,\r\n **kwargs):\r\n print(boo) # prints 5 because boo doesn't get passed\r\n print(kwargs)\r\n # do custom calculations and set some custom config values here\r\n super().__init__(moo=moo, boo=boo, **kwargs)\r\n```\r\nwill have the right value set in the config.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,680
| 1,680
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.0.dev20220925 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I want to load an existing model's config with `.from_pretrained` but also want to pass my own kwargs, `moo` & `boo`.
I'm extending `PretrainedConfig` like below:
```python
from transformers import PretrainedConfig
class MyClassificationConfig(PretrainedConfig):
def __init__(self,
moo='poo',
boo=5,
**kwargs):
print(boo) # prints 5 because boo doesn't get passed
print(kwargs)
# do custom calculations and set some custom config values here
super().__init__(**kwargs)
MyClassificationConfig.from_pretrained('google/canine-s',# example model, any other is same
id2label={1:"g",2:"b"},
moo="poo",
boo="hoo")
```
Only the predefined `id2label`, `label2id`, `num_classes` values get updated in the config. This happens [here](https://github.com/huggingface/transformers/blob/78a93d17c0e0bca0bc4477e0ee362a95d79f9b22/src/transformers/configuration_utils.py#L702).
The custom `moo` and `boo` params don't get passed to `MyClassificationConfig`, because kwargs don't get passed [here](https://github.com/huggingface/transformers/blob/78a93d17c0e0bca0bc4477e0ee362a95d79f9b22/src/transformers/configuration_utils.py#L696).
This results in the `moo` & `boo` argument values not changing from the default.
Since kwargs are optional, this can result in silent errors where you are actually using default values while thinking you are passing values!
I think this is a bug, but if it is intentional it would be nice to warn the user so there are no silent errors.
### Expected behavior
Should be able to extend PretrainedConfig class allowing custom kwargs.
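For reference, a minimal sketch of the workaround suggested in the comments above: forwarding the custom kwargs to `super().__init__` makes them config attributes, so `from_pretrained` recognises and updates them.
```python
from transformers import PretrainedConfig

class MyClassificationConfig(PretrainedConfig):
    def __init__(self, moo="poo", boo=5, **kwargs):
        # forwarding makes moo/boo attributes of the config, so from_pretrained
        # will overwrite them with any values the caller passes
        super().__init__(moo=moo, boo=boo, **kwargs)

config = MyClassificationConfig.from_pretrained("google/canine-s", boo="hoo")
print(config.boo)  # "hoo"
```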
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21757/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21756
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21756/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21756/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21756/events
|
https://github.com/huggingface/transformers/pull/21756
| 1,596,620,505
|
PR_kwDOCUB6oc5KmADJ
| 21,756
|
[Examples] Generalise run audio classification for log-mel models
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,687
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Currently, `run_audio_classification.py` hard-codes the model input name to `input_values` in the pre-processing function. This makes it compatible with Wav2Vec2-style CTC models, but not other speech models that use log-mel `input_features` (e.g. Whisper or AST).
We adopt the same strategy that we use in `run_speech_recognition_seq2seq.py` and set this to the correct model input name (based on the feature extractor's attribute `.model_input_names`):
https://github.com/huggingface/transformers/blob/1d4b79785263077f9f09ddde5a75ae4f116e85d7/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py#L419
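For illustration, a minimal sketch of the generalisation (the checkpoint and helper name are placeholders for this sketch; `model_input_names` is the feature-extractor attribute referenced above):
```python
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-tiny")

# "input_features" for log-mel models (Whisper, AST), "input_values" for Wav2Vec2-style models
model_input_name = feature_extractor.model_input_names[0]

def preprocess(audio_arrays, sampling_rate=16_000):
    inputs = feature_extractor(audio_arrays, sampling_rate=sampling_rate)
    # key the batch by the model's expected input name instead of hard-coding "input_values"
    return {model_input_name: inputs[model_input_name]}
```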
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21756/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21756",
"html_url": "https://github.com/huggingface/transformers/pull/21756",
"diff_url": "https://github.com/huggingface/transformers/pull/21756.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21756.patch",
"merged_at": 1677226747000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21755
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21755/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21755/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21755/events
|
https://github.com/huggingface/transformers/issues/21755
| 1,596,605,662
|
I_kwDOCUB6oc5fKkTe
| 21,755
|
Something confused me about Nystromformer
|
{
"login": "1649759610",
"id": 35913314,
"node_id": "MDQ6VXNlcjM1OTEzMzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/35913314?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1649759610",
"html_url": "https://github.com/1649759610",
"followers_url": "https://api.github.com/users/1649759610/followers",
"following_url": "https://api.github.com/users/1649759610/following{/other_user}",
"gists_url": "https://api.github.com/users/1649759610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1649759610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1649759610/subscriptions",
"organizations_url": "https://api.github.com/users/1649759610/orgs",
"repos_url": "https://api.github.com/users/1649759610/repos",
"events_url": "https://api.github.com/users/1649759610/events{/privacy}",
"received_events_url": "https://api.github.com/users/1649759610/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I'll ping @novice03 here as he's an expert on Nyströmformer ",
"Hello @1649759610, thank you for making this post. It looks like this is indeed an issue in the code that I might have overlooked. You are correct that `segment-means-seq-len` should be set to the length of the input. If I were to fix it, I would just remove the `segment-means-seq-len` parameter and set `self.seq_len` in the model to `config.max_position_embeddings`. I think a pull request would have to be made to make these changes. I am also guessing that the tests need to be changed accordingly. \r\n\r\nHowever, regarding the checkpoints, they were trained with Nystrom attention and not O(n^2) attention. This is just an issue in the HuggingFace implementation. So, they need not be re-trained. ",
"@1649759610 Would you like to make a PR about this?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,682
| 1,682
|
NONE
| null |
### System Info
Any GPU machine with Transformers 4.26.0
### Who can help?
@ArthurZucker @younesbelkada @sgugger @novice03
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The parameter `segment-means-seq-len` in the Nystromformer config is set to 64 and is equal to another parameter, `num_landmarks` (64). [refer to code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/nystromformer/configuration_nystromformer.py#L106)
But if they are equal, Nystromformer will perform O(n^2) attention like BERT, not the nystrom-attention proposed in the original paper: https://arxiv.org/abs/2102.03902. [refer to code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/nystromformer/configuration_nystromformer.py#L106)
Through experimentation and analysis, I think the parameter `segment-means-seq-len` should be the length of the tokenized input sequence. It should not be set to 64; setting it to 64 means you are using O(n^2) attention, not nystrom attention.
So, is there a problem with the code, or is my understanding wrong? Additionally, is the model weight [w-madison/nystromformer-5](https://github.com/huggingface/transformers/blob/main/src/transformers/models/nystromformer/configuration_nystromformer.py#L24) trained with O(n^2) attention? If yes, will the model weight not run with nystrom-attention, and does it need to be pre-trained with nystrom-attention?
### Expected behavior
The parameter `segment-means-seq-len` is set to the real tokenized sequence length, so that nystrom-attention can be used for training or inference.
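For illustration, a hedged sketch of the defaults in question (attribute names as in the linked configuration file):
```python
from transformers import NystromformerConfig

config = NystromformerConfig()
# both default to 64; per the report above, when these two are equal the
# landmark construction no longer compresses the sequence, so the attention
# effectively falls back to the dense O(n^2) form
print(config.segment_means_seq_len, config.num_landmarks)  # 64 64
```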
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21755/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21754
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21754/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21754/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21754/events
|
https://github.com/huggingface/transformers/pull/21754
| 1,596,587,459
|
PR_kwDOCUB6oc5Kl42A
| 21,754
|
[Whisper] Add model for audio classification
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"(Failing test is unrelated)"
] | 1,677
| 1,687
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds `WhisperForAudioClassification`: the Whisper encoder model with a sequence classification head on top for audio classification tasks.
With the changes implemented in #21756, Whisper can be fine-tuned for any audio classification task.
Results of fine-tuning suggest that this is an extremely promising approach for audio classification. On the FLEURS language identification task, fine-tuning Whisper medium achieves an accuracy of **88%**, beating the previous SoTA by 10% and the zero-shot Whisper model by 24% absolute:
<img width="614" alt="Screenshot 2023-02-28 at 09 37 27" src="https://user-images.githubusercontent.com/93869735/223370179-338e1f12-7793-48e5-9715-66d7f9da4af3.png">
See logs at [sanchit-gandhi/whisper-medium-fleurs-lang-id](https://huggingface.co/sanchit-gandhi/whisper-medium-fleurs-lang-id) for details.
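For reference, a minimal usage sketch of the new head (the checkpoint is the one from the logs above; the input is a placeholder):
```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, WhisperForAudioClassification

checkpoint = "sanchit-gandhi/whisper-medium-fleurs-lang-id"
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = WhisperForAudioClassification.from_pretrained(checkpoint)

audio = np.zeros(16_000, dtype=np.float32)  # placeholder: one second of silence at 16 kHz
inputs = feature_extractor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_features).logits
print(model.config.id2label[int(logits.argmax(dim=-1))])
```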
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21754/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21754/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21754",
"html_url": "https://github.com/huggingface/transformers/pull/21754",
"diff_url": "https://github.com/huggingface/transformers/pull/21754.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21754.patch",
"merged_at": 1678202421000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21752
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21752/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21752/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21752/events
|
https://github.com/huggingface/transformers/pull/21752
| 1,596,418,352
|
PR_kwDOCUB6oc5KlUQC
| 21,752
|
Different behavior in DistilBERT when using "inputs_embeds"
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @VictorSanh in case there was a specific reason behind this? ",
"changes look good to me, thanks Arthur!\r\n\r\nno specific reason behind it. my vague recollection is that `input_embeds` didn't exist when i first implemented it, so i would wait for core maintainer (or whatever the process is for transformers) to validate!"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
Fixes #21089
# What does this PR do?
The behavior of the DistilBERT model is wrong when input embeddings are provided, and does not align with BERT or other models.
This is due to the `Embeddings` layer, which internally computes the sum of the positional embeddings and the word embeddings. If input embeddings are passed, the positional embeddings are not added to them internally.
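For illustration, a minimal sketch of the corrected behaviour (a simplified stand-in module, not the actual transformers implementation): positional embeddings are added whether the caller passes `input_ids` or precomputed embeddings.
```python
import torch
import torch.nn as nn

class Embeddings(nn.Module):
    def __init__(self, vocab_size, dim, max_position_embeddings):
        super().__init__()
        self.word_embeddings = nn.Embedding(vocab_size, dim)
        self.position_embeddings = nn.Embedding(max_position_embeddings, dim)
        self.LayerNorm = nn.LayerNorm(dim)

    def forward(self, input_ids=None, input_embeds=None):
        if input_embeds is None:
            input_embeds = self.word_embeddings(input_ids)  # (bs, seq_len, dim)
        # positions are added in both branches, matching the behaviour of other models
        seq_length = input_embeds.size(1)
        position_ids = torch.arange(seq_length, device=input_embeds.device).unsqueeze(0)
        return self.LayerNorm(input_embeds + self.position_embeddings(position_ids))
```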
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21752/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21752",
"html_url": "https://github.com/huggingface/transformers/pull/21752",
"diff_url": "https://github.com/huggingface/transformers/pull/21752.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21752.patch",
"merged_at": 1677228488000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21751
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21751/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21751/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21751/events
|
https://github.com/huggingface/transformers/pull/21751
| 1,596,382,891
|
PR_kwDOCUB6oc5KlMrt
| 21,751
|
add BioGptForSequenceClassification
|
{
"login": "sadaqabdo",
"id": 73716075,
"node_id": "MDQ6VXNlcjczNzE2MDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/73716075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadaqabdo",
"html_url": "https://github.com/sadaqabdo",
"followers_url": "https://api.github.com/users/sadaqabdo/followers",
"following_url": "https://api.github.com/users/sadaqabdo/following{/other_user}",
"gists_url": "https://api.github.com/users/sadaqabdo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadaqabdo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadaqabdo/subscriptions",
"organizations_url": "https://api.github.com/users/sadaqabdo/orgs",
"repos_url": "https://api.github.com/users/sadaqabdo/repos",
"events_url": "https://api.github.com/users/sadaqabdo/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadaqabdo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,677
| 1,677
| 1,677
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/21530
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@NielsRogge @GuillemGSubies
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21751/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21751",
"html_url": "https://github.com/huggingface/transformers/pull/21751",
"diff_url": "https://github.com/huggingface/transformers/pull/21751.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21751.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21750
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21750/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21750/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21750/events
|
https://github.com/huggingface/transformers/pull/21750
| 1,596,296,889
|
PR_kwDOCUB6oc5Kk6FW
| 21,750
|
typos in french documentation
|
{
"login": "tpaviot",
"id": 660130,
"node_id": "MDQ6VXNlcjY2MDEzMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/660130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tpaviot",
"html_url": "https://github.com/tpaviot",
"followers_url": "https://api.github.com/users/tpaviot/followers",
"following_url": "https://api.github.com/users/tpaviot/following{/other_user}",
"gists_url": "https://api.github.com/users/tpaviot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tpaviot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tpaviot/subscriptions",
"organizations_url": "https://api.github.com/users/tpaviot/orgs",
"repos_url": "https://api.github.com/users/tpaviot/repos",
"events_url": "https://api.github.com/users/tpaviot/events{/privacy}",
"received_events_url": "https://api.github.com/users/tpaviot/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Fix a few typos in the French documentation.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @sgugger, @stevhliu and @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21750/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21750",
"html_url": "https://github.com/huggingface/transformers/pull/21750",
"diff_url": "https://github.com/huggingface/transformers/pull/21750.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21750.patch",
"merged_at": 1677140221000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21749
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21749/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21749/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21749/events
|
https://github.com/huggingface/transformers/pull/21749
| 1,596,286,734
|
PR_kwDOCUB6oc5Kk373
| 21,749
|
[WIP] A potential fix for `AutoConfig` and `AutoModel`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Close as this is not an issue."
] | 1,677
| 1,679
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
⚠️ This is just to demonstrate the issue and a (maybe super wrong) way to fix it:
- Maybe the usage of such models is very low - not worth the time
- I can't find `DecisionTransformerGPT2Model` on the Hub.
- `nllb` is not a real issue, as it contains only a tokenizer, and `AutoTokenizer` works well with it.
- Should we instead create new configuration classes and associate them with the `model_type`s shown below?
- The current fix is kind of hacky, and it also prevents obtaining the complete list in the different `XXX_MODEL_MAPPINGS`.
## Issue
Some keys in `MODEL_MAPPING_NAMES` are not in `CONFIG_MAPPING_NAMES`. Here are 2 examples
- `decision_transformer_gpt2` (associated with the model class `DecisionTransformerGPT2Model`)
- `nllb` (associated with the model class `M2M100Model`)
When we have a configuration with these `model_type`:
- saving the configuration won't save the specified `model_type`
- so loading will load the **wrong** (in some sense) `model_type` in the configuration, and therefore use the wrong model class to load the model checkpoint (in some cases).
The following code snippet shows the problems
### Code snippet
#### This shows the model type `decision_transformer_gpt2` is not saved
```python
import os
import json
import tempfile
from transformers import DecisionTransformerConfig, AutoConfig
config = DecisionTransformerConfig()
# originally being `decision_transformer`
print(config.model_type)
config.model_type = "decision_transformer_gpt2"
# become `decision_transformer_gpt2`
print(config.model_type)
with tempfile.TemporaryDirectory() as tmpdir:
    config.save_pretrained(tmpdir)
    # check what is saved
    with open(os.path.join(tmpdir, "config.json")) as fp:
        config_dict = json.load(fp)
    # this should be `"decision_transformer_gpt2"`, but we get `decision_transformer`
    print(config_dict["model_type"])
    auto_config = AutoConfig.from_pretrained(tmpdir)
    # this should be `"decision_transformer_gpt2"`, but we get `decision_transformer`
    print(auto_config.model_type)
    assert auto_config.model_type == "decision_transformer_gpt2"
```
#### This shows `AutoModel` loads the model checkpoint with the wrong model class
```python
import tempfile
from transformers import DecisionTransformerConfig, AutoModel, DecisionTransformerGPT2Model
config = DecisionTransformerConfig()
# originally being `decision_transformer`
print(config.model_type)
config.model_type = "decision_transformer_gpt2"
# become `decision_transformer_gpt2`
print(config.model_type)
# create a model with type `DecisionTransformerGPT2Model`
model = DecisionTransformerGPT2Model(config)
with tempfile.TemporaryDirectory() as tmpdir:
    model.save_pretrained(tmpdir)
    auto_model = AutoModel.from_pretrained(tmpdir)
    # this should be `"decision_transformer_gpt2"`, but we get `decision_transformer`
    print(auto_model.config.model_type)
    # this should be `"DecisionTransformerGPT2Model"`, but we get `DecisionTransformerModel`
    print(auto_model.__class__.__name__)
    assert auto_model.__class__.__name__ == "DecisionTransformerGPT2Model"
```
Enhance `AutoConfig`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21749/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21749",
"html_url": "https://github.com/huggingface/transformers/pull/21749",
"diff_url": "https://github.com/huggingface/transformers/pull/21749.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21749.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21748
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21748/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21748/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21748/events
|
https://github.com/huggingface/transformers/issues/21748
| 1,596,102,956
|
I_kwDOCUB6oc5fIpks
| 21,748
|
"Emotion English DistilRoBERTa-base" - Inference API not loading model
|
{
"login": "mritik",
"id": 56925074,
"node_id": "MDQ6VXNlcjU2OTI1MDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/56925074?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mritik",
"html_url": "https://github.com/mritik",
"followers_url": "https://api.github.com/users/mritik/followers",
"following_url": "https://api.github.com/users/mritik/following{/other_user}",
"gists_url": "https://api.github.com/users/mritik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mritik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mritik/subscriptions",
"organizations_url": "https://api.github.com/users/mritik/orgs",
"repos_url": "https://api.github.com/users/mritik/repos",
"events_url": "https://api.github.com/users/mritik/events{/privacy}",
"received_events_url": "https://api.github.com/users/mritik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Narsil ",
"We've been having issues today. Thanks for reporting.\r\n\r\nThings should be slowly getting back to normal.",
"Closing as everything should be back online."
] | 1,677
| 1,677
| 1,677
|
NONE
| null |
### System Info
The Inference API does not load this specific model: "Emotion English DistilRoBERTa-base". It keeps returning a 503 error and timing out (both when I try to make a request locally through my webapp and on the huggingface website). Other models seem to be loading fine. This is very strange because things were working fine for the past month. I am still well below my request and character limits too. Occasionally it does work properly but this is very rare.
<img width="515" alt="Screen Shot 2023-02-22 at 8 53 38 PM" src="https://user-images.githubusercontent.com/56925074/220805577-bb4eb002-68c0-41b9-9392-e0d43f2dbe44.png">
### Who can help?
@ArthurZucker
@sgugger
@Narsil
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```js
async function query(data) {
  const response = await fetch(
    "https://api-inference.huggingface.co/models/j-hartmann/emotion-english-distilroberta-base",
    {
      headers: { Authorization: "Bearer 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" },
      method: "POST",
      body: JSON.stringify(data),
    }
  );
  const result = await response.json();
  return result;
}
```
### Expected behavior
For the model to load and return my results. I modify and log the JSON result elsewhere in my code, but that is not relevant to this problem so I didn't provide it.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21748/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21747
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21747/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21747/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21747/events
|
https://github.com/huggingface/transformers/issues/21747
| 1,595,909,278
|
I_kwDOCUB6oc5fH6Se
| 21,747
|
Slow decoding with many special tokens in vocabulary
|
{
"login": "samsontmr",
"id": 15007950,
"node_id": "MDQ6VXNlcjE1MDA3OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samsontmr",
"html_url": "https://github.com/samsontmr",
"followers_url": "https://api.github.com/users/samsontmr/followers",
"following_url": "https://api.github.com/users/samsontmr/following{/other_user}",
"gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions",
"organizations_url": "https://api.github.com/users/samsontmr/orgs",
"repos_url": "https://api.github.com/users/samsontmr/repos",
"events_url": "https://api.github.com/users/samsontmr/events{/privacy}",
"received_events_url": "https://api.github.com/users/samsontmr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Slow tokenizers are... slow. That's why we wrote the tokenizers library ;-) Why not use `T5TokenzierFast` which doesn't have the same problem?",
"`T5TokenzierFast` does not have byte-fallback + why artificially handicap the slow tokenizer if it could be more efficient (using sets instead of lists and computing the attribute only when it's updated)?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,680
| 1,680
|
NONE
| null |
### System Info
present across multiple versions
### Who can help?
@younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import T5Tokenizer
from time import time
from random import randint
t1 = T5Tokenizer.from_pretrained('t5-base')
t2 = T5Tokenizer.from_pretrained('t5-base', extra_ids=2000)
to_decode = [randint(0, 32000) for i in range(10000)]
start = time()
t1.decode(to_decode)
print("few special tokens:", time() - start)
start = time()
t2.decode(to_decode)
print("many special tokens:", time() - start)
```
### Expected behavior
The slowdown should not be so drastic. The cause is an inefficient implementation of [`all_special_ids`](https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils_base.py#L1293) and [`all_special_tokens`](https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils_base.py#L1267). Additionally, generating them on the fly incurs a large overhead since this attribute is queried for every id to be decoded ([here](https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils.py#L907) and [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/tokenization_t5.py#L324)).
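A hedged sketch of the fix suggested above (illustrative only, not the library's actual implementation):
```python
# Cache the special ids in a set and invalidate the cache only when the
# vocabulary changes, so decoding does an O(1) membership check per id
# instead of rebuilding a list for every token.
class SpecialIdsCache:
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer
        self._ids = None

    def invalidate(self):
        # Call whenever special tokens are added or replaced.
        self._ids = None

    @property
    def all_special_ids(self):
        if self._ids is None:
            self._ids = set(
                self.tokenizer.convert_tokens_to_ids(self.tokenizer.all_special_tokens)
            )
        return self._ids
```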
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21747/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21746
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21746/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21746/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21746/events
|
https://github.com/huggingface/transformers/issues/21746
| 1,595,851,598
|
I_kwDOCUB6oc5fHsNO
| 21,746
|
Regarding Ragend2endRetriever
|
{
"login": "Rajdoshi99",
"id": 44093439,
"node_id": "MDQ6VXNlcjQ0MDkzNDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/44093439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rajdoshi99",
"html_url": "https://github.com/Rajdoshi99",
"followers_url": "https://api.github.com/users/Rajdoshi99/followers",
"following_url": "https://api.github.com/users/Rajdoshi99/following{/other_user}",
"gists_url": "https://api.github.com/users/Rajdoshi99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rajdoshi99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rajdoshi99/subscriptions",
"organizations_url": "https://api.github.com/users/Rajdoshi99/orgs",
"repos_url": "https://api.github.com/users/Rajdoshi99/repos",
"events_url": "https://api.github.com/users/Rajdoshi99/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rajdoshi99/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,677
| 1,678
| 1,678
|
NONE
| null |
Unable to change the generator to T5ForConditionalGeneration.
It raises: AttributeError: 'T5ForConditionalGeneration' object has no attribute 'rag'
--model_name_path t5-base
--model_type t5
@shamanez
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21746/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21745
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21745/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21745/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21745/events
|
https://github.com/huggingface/transformers/pull/21745
| 1,595,735,452
|
PR_kwDOCUB6oc5KjBHP
| 21,745
|
Tokengt for Graph Classification
|
{
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21745). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Oops. Don't mark stale. Hold on ... ",
"Hi @Raman-Kumar ! Do you need a hand on this PR? :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi, @clefourrier, soon I will **must** complete \r\n(I was focusing on my new full-time job recently)",
"Congrats on your new job! I'll have some time off soon so I'll be less responsive, but feel free to ping me nonetheless!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"In July Month, I work on this. I have more holiday on that month ",
"Hi @clefourrier and @Raman-Kumar,\r\nI'm fairly new to ML. I have a pretrained Graphormer model that I'd like to use for inference on individual instances. Looking around I arrived here, yet it's still not clear to me how I can proceed. Would you kindly provide some guidance?\r\n\r\nMany thanks!",
"Hi @sarah-af ,\r\nPlease open a dedicated issue and tag me on it :)",
"Thank you for the quick response. I did read the documentation and the\r\nblog. For some reason, I'm unable to do a forward pass on the test split\r\nthat has already been collated and preprocessed. Perhaps I missed\r\nsomething. I'll check again, and I'll open a dedicated issue if I can't\r\nfigure it out.\r\nCheers!\r\n\r\nOn Tue, Sep 12, 2023 at 6:20 PM Clémentine Fourrier <\r\n***@***.***> wrote:\r\n\r\n> Hi @sarah-af <https://github.com/sarah-af> ,\r\n> Did you read the Graphormer page documentation here\r\n> <https://huggingface.co/docs/transformers/model_doc/graphormer> and the\r\n> Graph ML classification blog post\r\n> <https://huggingface.co/blog/graphml-classification>?\r\n> If yes, what is not clear?\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/pull/21745#issuecomment-1716038357>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/ARYPENHMTVHAMDB3A7UD55LX2CDWTANCNFSM6AAAAAAVEYFRBU>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] | 1,677
| 1,694
| 1,686
|
MEMBER
| null |
# What does this PR do?
replaces #21098
Adds the TokenGT model for graph classification in Transformers.
Done:
- [x] Architecture ported
- [x] Collator (the model has no tokenizer) and preprocessing
Todo:
- [ ] Test results against original implementation, to make sure they are within precision range.
- [ ] Add checkpoints and make sure they load properly
- [ ] Update doc
- [ ] Update test suite
- [ ] Add model card for the checkpoints once added
## Dependencies
Cython (as for Graphormer) and einops.
Linked to #21079
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. (Discussed on Slack)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Not tagging anyone for now as this is a draft.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21745/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21745",
"html_url": "https://github.com/huggingface/transformers/pull/21745",
"diff_url": "https://github.com/huggingface/transformers/pull/21745.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21745.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21744
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21744/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21744/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21744/events
|
https://github.com/huggingface/transformers/pull/21744
| 1,595,667,890
|
PR_kwDOCUB6oc5KiyTj
| 21,744
|
Update doctest GH workflow file
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Just as in other workflow files, this PR adds a step to the doctest workflow file
```
- name: Show installed libraries and their versions
  run: pip freeze
```
so we can quickly access this information whenever necessary, enabling faster debugging.
It took me some time to find the fix shown in #21742
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21744/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21744",
"html_url": "https://github.com/huggingface/transformers/pull/21744",
"diff_url": "https://github.com/huggingface/transformers/pull/21744.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21744.patch",
"merged_at": 1677141653000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21743
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21743/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21743/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21743/events
|
https://github.com/huggingface/transformers/pull/21743
| 1,595,395,514
|
PR_kwDOCUB6oc5Kh3gH
| 21,743
|
[`tests`] add `accelerate` marker
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @younesbelkada ! I understand the advantage of adding this. However, I would recommend to follow what we have like `is_pt_tf_cross_test` as in\r\nhttps://github.com/huggingface/transformers/blob/619d51e01f326d298687912d1f65f8d460f2a6e2/src/transformers/testing_utils.py#L150\r\nAlong this approach, you also have to check\r\n\r\nhttps://github.com/huggingface/transformers/blob/619d51e01f326d298687912d1f65f8d460f2a6e2/src/transformers/testing_utils.py#L142\r\nand\r\n(I am not familiar with this part hoever)\r\nhttps://github.com/huggingface/transformers/blob/619d51e01f326d298687912d1f65f8d460f2a6e2/conftest.py#L34\r\n\r\nThis is a suggestion based my intuition - I never did this before in `transformers`. You can wait a bit to hear what @sgugger says.\r\n",
"Thanks a lot for sharing that! \r\nI agree for consistency it would make sense to do this, however this solution seems IMO much simpler, given the number of affected tests (usually for each model there are only 3 `accelerate` tests), let's see what @sgugger will say!",
"> Thanks a lot for sharing that! I agree for consistency it would make sense to do this, however this solution seems IMO much simpler, given the number of affected tests (usually for each model there are only 3 `accelerate` tests)\r\n\r\nMy response here is not to convince @younesbelkada the suggested approach, but just want to point out the above argument doesn't seem very valid: if you look `tests/test_modeling_common.py`, there is only one test decorated with `is_pt_tf_cross_test`, and yet we still use that approach.",
"The difference with the PT/TF cross tests is that we want to skip them unless a special env var is set (to not run them in the torch job and tf job for instance). This is not necessary here since as we don't want to prevent those tests from running in the regular jobs, this is just for convenience.",
"The failing test seems to be unrelated to this PR, merging!"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR introduces a nice utility for users and contributors (such as myself) who want to run just `accelerate`-specific tests. Thanks to @fxmarty, who introduced this utility to me.
By using `pytest.mark`, you can run `accelerate`-specific tests easily. With this PR, if I want to run the `accelerate` tests on OPT, I can just do:
```bash
RUN_SLOW=1 pytest -m accelerate_tests tests/models/opt/test_modeling_opt.py
```
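For reference, a hypothetical sketch of how an individual test would opt into the marker (the marker name mirrors the command above; the exact tests touched by this PR may differ):
```python
# Tag accelerate-specific tests so `pytest -m accelerate_tests` selects
# only them; tests without the marker are filtered out by `-m`.
import pytest

@pytest.mark.accelerate_tests
def test_model_parallelism():
    ...  # test body that requires accelerate
```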
cc @ydshieh @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21743/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21743",
"html_url": "https://github.com/huggingface/transformers/pull/21743",
"diff_url": "https://github.com/huggingface/transformers/pull/21743.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21743.patch",
"merged_at": 1677497614000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21742
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21742/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21742/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21742/events
|
https://github.com/huggingface/transformers/pull/21742
| 1,595,357,713
|
PR_kwDOCUB6oc5Khvk7
| 21,742
|
Fix 2 quicktour file doctest
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Update expected output values due to updates of a package or a Hub repo file.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21742/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21742",
"html_url": "https://github.com/huggingface/transformers/pull/21742",
"diff_url": "https://github.com/huggingface/transformers/pull/21742.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21742.patch",
"merged_at": 1677141689000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21741
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21741/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21741/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21741/events
|
https://github.com/huggingface/transformers/pull/21741
| 1,595,260,351
|
PR_kwDOCUB6oc5KhaoN
| 21,741
|
Add ALIGN to transformers
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for adding this model! Most comments are just formatting nits.\r\n> \r\n> Two main comments to address:\r\n> \r\n> * Can you modify the model prefix to Align, rather than ALIGN?\r\n> * All of the vision encoder architecture is, as far as I can tell, exactly the same as the EfficientNet model. And the text encoder is directly imported from Bert. Is there a reason for reimplementing EfficientNet but importing Bert?\r\n\r\nThank you! \r\n\r\nI can modify the model prefix to Align but then I will have to change README titles to Align in order to not get repo-consistency errors and I'd like to keep the model name in the documentation the same as the original work. Could we keep it as it is, similar to BERT and CLIP? cc @sgugger \r\n\r\nThe Kakao Brain EfficientNet implementation is slightly different from the official one (final layers), hence I'm copy pasting the modules and changing the final layers.",
"> The Kakao Brain EfficientNet implementation is slightly different from the official one (final layers), hence I'm copy pasting the modules and changing the final layers.\r\n\r\nOK, I understand better now. I realised there's something I overlooked in my initial review: why are we using a randomly initialized Bert model as the text encoder? My understanding of this model is a text and vision backbone are loaded in and then trained to align their embeddings i.e. the weights we should be loading are the respective BERT and EfficientNet weights post-training and the api would be something like `AlignModel.from_pretrained(google/align-efficientnetv2-bert-large')`. Is this correct? \r\n\r\nI agree that it'd make more sense to initialize BERT and EfficientNet from pretrained checkpoints but here is the experiment setup described in the paper:\r\n> We train our ALIGN models from scratch, using the open sourced implementation of EfficientNet as the image encoder and BERT as the text encoder.\r\n\r\nThe Kakao Brain implementation also trains models from scratch, should I keep it in line with the paper or use the respective repos? Both are fine with me",
"\r\n> I can modify the model prefix to Align but then I will have to change README titles to Align in order to not get repo-consistency errors and I'd like to keep the model name in the documentation the same as the original work. Could we keep it as it is, similar to BERT and CLIP? cc @sgugger\r\n> \r\nMy mistake, I figured out how to keep the documentation title camel cased, disregard this please.\r\n\r\n",
"Ah, OK. Sorry for the confusion. Just to make sure I understand: Google haven't released their weights; kakao Brain have themselves trained the ALIGN model and these are the weights we're using. Is this right?",
"> Ah, OK. Sorry for the confusion. Just to make sure I understand: Google haven't released their weights; kakao Brain have themselves trained the ALIGN model and these are the weights we're using. Is this right?\r\n\r\nNo problem at all, Kakao Brain implemented and trained the ALIGN model themselves, Google haven't released checkpoints nor the code.",
"I think all comments are addressed, pinging @amyeroberts and @sgugger for the final review"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds [ALIGN](https://arxiv.org/abs/2102.05918) to transformers, a multi-modal model similar to CLIP. ALIGN uses EfficientNet as its vision encoder and BERT as its text encoder. No public implementation is available for this model; the code is adapted from Kakao Brain's TensorFlow implementation shared with us.
To do:
- [x] Upload converted model
- [x] Create a model card
- [x] Fix processor tests
- [x] Fix model tests
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21741/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21741",
"html_url": "https://github.com/huggingface/transformers/pull/21741",
"diff_url": "https://github.com/huggingface/transformers/pull/21741.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21741.patch",
"merged_at": 1677695011000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21740
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21740/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21740/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21740/events
|
https://github.com/huggingface/transformers/pull/21740
| 1,595,116,307
|
PR_kwDOCUB6oc5Kg73l
| 21,740
|
[examples/summarization] deal with `max_length` and `num_beams`
|
{
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @sgugger, just removed the `max_length`!\r\n\r\nI also added another modif which separates the preprocessing of the training and the validation set in order to truncate the outputs by `max_target_length` and `val_max_target_length` respectively"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Hello 👋,
1. In examples/pytorch/summarization/run_summarization.py, when `generation_max_length` is not defined, the `max_length` parameter for the `generate` function is set to `val_max_target_length` in lines [675-679](https://github.com/huggingface/transformers/blob/82e61f34451dbea2de8d2220d51b0609d605dfd7/examples/pytorch/summarization/run_summarization.py#L675-L679), and is used for the **final evaluation** and prediction after training. However, the `max_length` used for the **evaluation during training** may be `None` or the `max_length` defined in the model's config. This is inconsistent behavior. Here I unify this parameter before the `Seq2SeqTrainer` is initialized (see the sketch after this list), and apply the same to `num_beams`.
2. In examples/pytorch/summarization/run_summarization_no_trainer.py, fixed the `max_length` parameter; I think it has higher priority than `val_max_target_length`.
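A hedged sketch of the unification in point 1 (the argument names follow run_summarization.py; the actual diff may differ slightly):
```python
# Resolve generation arguments once, before Seq2SeqTrainer is built, so
# in-training evaluation and the final evaluate/predict steps agree.
training_args.generation_max_length = (
    training_args.generation_max_length
    if training_args.generation_max_length is not None
    else data_args.val_max_target_length
)
training_args.generation_num_beams = (
    data_args.num_beams
    if data_args.num_beams is not None
    else training_args.generation_num_beams
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    # ... datasets, tokenizer, data collator, metrics ...
)
```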
Please ignore this PR if it's intended behavior :)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc: @sgugger @patil-suraj
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21740/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21740",
"html_url": "https://github.com/huggingface/transformers/pull/21740",
"diff_url": "https://github.com/huggingface/transformers/pull/21740.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21740.patch",
"merged_at": 1677482295000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21739
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21739/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21739/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21739/events
|
https://github.com/huggingface/transformers/issues/21739
| 1,595,114,079
|
I_kwDOCUB6oc5fE4Jf
| 21,739
|
Pegasus Tokenizer is throwing away newline tokens
|
{
"login": "njbrake",
"id": 33383515,
"node_id": "MDQ6VXNlcjMzMzgzNTE1",
"avatar_url": "https://avatars.githubusercontent.com/u/33383515?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/njbrake",
"html_url": "https://github.com/njbrake",
"followers_url": "https://api.github.com/users/njbrake/followers",
"following_url": "https://api.github.com/users/njbrake/following{/other_user}",
"gists_url": "https://api.github.com/users/njbrake/gists{/gist_id}",
"starred_url": "https://api.github.com/users/njbrake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/njbrake/subscriptions",
"organizations_url": "https://api.github.com/users/njbrake/orgs",
"repos_url": "https://api.github.com/users/njbrake/repos",
"events_url": "https://api.github.com/users/njbrake/events{/privacy}",
"received_events_url": "https://api.github.com/users/njbrake/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Even more concise:\r\n```\r\n>>> tokenizer.encode('\\n')\r\n[1]\r\n>>> tokenizer.decode(1)\r\n'</s>'\r\n```",
"Hey! Thanks for posting this. \r\nThe reason why the new lines are automatically removed is because this is the default behavior for `Pegasus` see [here](https://github.com/google-research/pegasus/blob/939830367bcf411193d2b5eca2f2f90f3f9260ca/pegasus/ops/sp_text_encoder.cc#L79), where they have a `preserve_new_line` variable. I don't really know why we don't, but you can add this token as part of the `added_vocab` in order to make sure that it is tready as a special token and not removed. That is the quickest fix .\r\n\r\nOtherwise, I can open a PR to add this argument (as it was in the original code I guess it makes sense?). I also need to check what it the common practice for that, cc @Narsil if you have an idea",
"The `\\n` characters get removed by the normalization of this model.\r\n\r\nIt's both within the `setnencepiece.model` file and the equivalent fast tokenizer (PrecompiledCharMap).\r\nThe only way to get those back would be to modify the normalizer. But since the model vocab doesn't contain any such tokens, you're going to end up with only `unk` everywhere there's a newline.\r\n\r\nIs it possible for you to use `return_offsets_mapping` to get the offsets and see where those missing values are ?\r\n\r\n```python\r\nencoded = tokenizer(\"This \\n is\", return_offsets_mapping=True)\r\nencoded[\"offset_mapping\"]\r\n# [(0, 4), (7, 9), (0, 0)]\r\n```\r\nThose show the \"hole\" within the original string. This can account for newlines as well as other regularized content. It's the only generally working approach for these kind of things.\r\n\r\nSince there is no newline in the vocab, the model will never be able to output back any new lines so I'm not sure adding an option will be of any help here @ArthurZucker \r\n\r\nThose offsets show there's",
"Thanks for the feedback, @Narsil and @ArthurZucker! Based on your feedback I did this:\r\n```\r\n>>> tokenizer = transformers.AutoTokenizer.from_pretrained(\"google/pegasus-x-base\")\r\n>>> token = tokenizers.AddedToken(content=\"\\n\", normalized=False)\r\n>>> tokenizer.add_tokens(list([token])\r\n>>> sample = \"I am a section \\n \\n Now I should be a few lines below!\"\r\n>>> inputs = tokenizer.encode(sample, return_tensors=\"pt\")\r\n>>> inputs\r\ntensor([[ 125, 346, 114, 1201, 96103, 96103, 1032, 125, 246, 129,\r\n 114, 324, 1569, 487, 147, 1]])\r\n>>> out = tokenizer.decode(inputs[0])\r\n>>> out\r\n'I am a section\\n\\n Now I should be a few lines below!</s>'\r\n```\r\nSo it appears that everything is ok now. Is there something wrong with my approach, since it's not exactly what you recommended?",
"It is exactly what I was recommending 👍🏻 also you can now push the tokenizer to the hub and that should be it! \r\nIf you do not want the model to strip right and left for this token, you can also control that. Glad this solved your issue! ",
"Can't we just replace all the new line characters `\\n` with `<n>`, which I believe is its equivalent in Pegasus?"
] | 1,677
| 1,702
| 1,677
|
CONTRIBUTOR
| null |
### System Info
It's important for my model to learn where the newlines should be placed in the output, and from my understanding this information is being removed by the Pegasus tokenizer (applicable to the latest version of the Transformers/Tokenizers libraries):
For example, if my target output is
```
SECTION HEADING \n\nHere is the output for this section, cool!
```
If I encode and decode through the tokenizer, it becomes
```
SECTION HEADING Here is the output for this section, cool!
```
So I guess my questions would be:
Am I missing something, and is there some toggle I can enable that would allow the tokenizer to preserve newlines?
If there is no such toggle, is there a reason one shouldn't be added?
Of course I have the option of pre-processing my text to convert newlines to `<n>` and then post-processing to turn `<n>` back into `\n`, but that seems a little hacky for my liking 😅 (a minimal sketch of this workaround is below).
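A minimal sketch of that workaround, assuming `<n>` is available (or added) as a token in the vocabulary — the helper names here are hypothetical:
```python
# Round-trip newlines through a placeholder token the tokenizer keeps.
def encode_newlines(text: str) -> str:
    return text.replace("\n", "<n>")

def decode_newlines(text: str) -> str:
    return text.replace("<n>", "\n")
```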
### Who can help?
@ArthurZucker, @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("google/pegasus-x-base")
sample = "I am a section \n \n Now I should be a few lines below!"
inputs = tokenizer.encode(sample, return_tensors="pt")
out = tokenizer.decode(inputs[0])
```
Out is
```
'I am a section Now I should be a few lines below!</s>'
```
So it is stripping out the newline characters
### Expected behavior
It should not strip out the newline characters, or I should have the option to tell the tokenizer not to remove newlines (This functionality may already exist and I'm just unaware of it)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21739/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21739/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21738
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21738/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21738/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21738/events
|
https://github.com/huggingface/transformers/pull/21738
| 1,595,079,205
|
PR_kwDOCUB6oc5Kgz7K
| 21,738
|
Generate: Fix GIT batched captioning
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,684
| 1,677
|
MEMBER
| null |
# What does this PR do?
Fixes #21714 (batched image captioning with GIT not working)
The problem, at a high level, boils down to a previously incomplete `batch_size` inference when neither `input_ids` nor `inputs_embeds` were being passed to `.generate()` in decoder-only models -- we were always assuming `batch_size=1`, which is not correct in some multimodal models like GIT.
If `.generate()` doesn't receive `input_ids`, then some input tensor must live in `model_kwargs`. Now, we look for tensors in `model_kwargs` and use them as a source of information to determine the `batch_size`, which is then used to initialize `input_ids` with the correct shape (see the sketch below).
👉 changes also made on the TF side
👉 a test was added to ensure we don't regress
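For illustration only — the real change lives in `generation/utils.py` and differs in detail — the inference now follows roughly this pattern (`infer_batch_size` is a hypothetical helper; `pixel_values` is just one example of a tensor that can live in `model_kwargs`):
```python
import torch

def infer_batch_size(input_ids=None, model_kwargs=None):
    # explicit input_ids win when provided
    if input_ids is not None:
        return input_ids.shape[0]
    # otherwise, any tensor in model_kwargs (e.g. pixel_values for GIT)
    # carries the batch dimension we need
    for value in (model_kwargs or {}).values():
        if isinstance(value, torch.Tensor):
            return value.shape[0]
    # previous behavior: assume a single sequence
    return 1
```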
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21738/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21738",
"html_url": "https://github.com/huggingface/transformers/pull/21738",
"diff_url": "https://github.com/huggingface/transformers/pull/21738.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21738.patch",
"merged_at": 1677145838000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21737
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21737/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21737/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21737/events
|
https://github.com/huggingface/transformers/issues/21737
| 1,595,044,311
|
I_kwDOCUB6oc5fEnHX
| 21,737
|
[`Generate`] Fix `gradient_checkpointing` and `use_cache` bug for generate-compatible models
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
},
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@younesbelkada I am a little confused on where the list for generate-compatible models is located. I'd like to pick up this issue if I can find it!",
"Hello @mollerup23 \r\nThanks for your interest, we will update the list with @gante once #21733 gets merged !",
"@younesbelkada Looks like it will be essentially the same fix across the other models too. Do you want me to pull that fix into a utility function once merged?\r\nJust for illustration, something like - \r\n```py\r\nuse_cache = should_use_cache(logger, use_cache, self.gradient_checkpointing, self.training)\r\npresents = () if use_cache else None\r\n```\r\nand likely in modeling_utils.py -\r\n```py\r\ndef should_use_cache(logger: Logger, use_cache: bool, gradient_checkpointing: bool, training: bool) -> bool:\r\n if use_cache:\r\n if gradient_checkpointing and training:\r\n logger.warning(\r\n \"`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...\"\r\n )\r\n else:\r\n return True\r\n return False\r\n```\r\n\r\nWas looking into making the fix and realized there would be some repetition so thought I'd ask",
"Hey @connor-henderson 👋 Thank you for the suggestion! Usually, I'd give the green light to configuration-related DRY approaches such as the one you suggested. However, this one would sit right in `forward()`, and we prioritize clear code (= avoid abstractions) in the modeling code itself.\r\n\r\nIn case you're curious about this position, we have a blog post about why we do it [here](https://huggingface.co/blog/transformers-design-philosophy) 🤗 ",
"@mollerup23 the list and the instructions are updated, in case you're interested in contributing :D ",
"Would like to take GPT-2!",
"I want to work on GPT-J!",
"I would like to work on Blenderbot ",
"Happy to take on Git, GptNeoX, ImageGPT, LED, LongT5, M2M100, Marian, MBart, MegratronBert, MVP, OPT, Pegasus, PegasusX, RemBert, RoFormer",
"Thanks a mile @KMFODA ! 💯 \r\nFeel free to take those, and tag me or @gante whenever you feel ready!",
"Hi, I am a newbie to open source and would like to contribute. @younesbelkada can I contribute to this issue?\r\n\r\n",
"Hey @saswatmeher \r\nOf course yes!!\r\nYou can pick up a model that has not been taken yet, for example `BioGpt` and do the following:\r\n\r\n1- Fork this repository\r\n2- Clone your fork locally and create a new branch `git checkout -b fix-bio-gpt-issue`\r\n3- Modify the file `src/transformers/models/biogpt/modeling_biogpt.py` the same way as all the contributors have modified their files in #21818 #21833 #21815 etc. (You can check `Files Changed` on the PR, on the right top of the Pull Request page)\r\n4- Apply these changes and push the changes on your branch\r\n5- Finally open a Pull Request between `fix-bio-gpt-issue` and `main` branch of `transformers` (+ tag us, myself + @gante ) and we should be good to go! \r\n\r\nLet us know if you have more questions!",
"I am happy to pick up other models too. Can I work on Bart, Bert, BigBird.",
"Hello, can I work on Bloom?",
"Hi @asrimanth , yes sure you can!",
"> @mollerup23 the list and the instructions are updated, in case you're interested in contributing :D\r\n\r\nGreat! I'd like to work on OPT",
"HI @gante working on \r\n Whisper\r\n XGLM\r\n XLMRobertaXL",
"@mollerup23 hey! OPT was claimed by @KMFODA a few comments above :) Still plenty of models up for grabs, though!",
"Hello 👋, I would like to contribute and work on T5. Let me know, Thanks!\r\n[PR](https://github.com/huggingface/transformers/pull/22036) for the suggested changes.",
"@younesbelkada Can I claim TimeSeriesTransformer?",
"hi @mollerup23\r\nOf course yes! Please feel free to take it!",
"Hey @krypticmouse! \r\nDo you need any help for making the fix on GPT-j? ",
"Hi @younesbelkada, Thanks for asking. My PR got merged long ago.",
"Thanks for the heads up, just updated the table, the only model left seems to be TimeSeries Transformer then, thank you all for the great contribution!",
"Hey @younesbelkada, may I work on the TimeSeries Transformer? ",
"@annahung31 I believe @mollerup23 is working on it :) @mollerup23, can you confirm?",
"yes @gante @annahung31 , the PR is here: https://github.com/huggingface/transformers/pull/22272"
] | 1,677
| 1,679
| 1,679
|
CONTRIBUTOR
| null |
## Feature request
When a model uses `gradient_checkpointing` and a user calls `generate` with `use_cache`, this leads to bugs in some models, such as the one described in https://github.com/huggingface/transformers/pull/21733
The fix should be to slightly refactor some models following the same procedure as in the aforementioned PR
### How to participate
1. If it is your first time here, have a quick look at our [contribution guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) 🤗
2. Pick a model from the list below. Check in the comments here if it hasn't been claimed yet.
3. Claim your models in the comments (e.g. "I want to work on GPT2")
4. Replicate the changes of [this PR](https://github.com/huggingface/transformers/pull/21733) to your model of choice. In other words, move the `if` block to the line above the `... if use_cache else None`, in the same `.forward()` function. Please note that some models may have more than one instance of this block! (A sketch of the resulting pattern is shown after this list.)
5. Make sure you've run our automated code formatting tool (i.e. run `make fixup` in your shell -- also run `make fix-copies` if it requests you to do so)
6. Open a PR. Tag @younesbelkada or @gante (one of us is enough)
That's it! With each change, you'll be making `transformers` a little bit better for all of us 💛
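For reference, the resulting pattern looks roughly like this (simplified from the linked PR; names vary slightly per model):
```python
if self.gradient_checkpointing and self.training:
    if use_cache:
        logger.warning(
            "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
        )
        use_cache = False

presents = () if use_cache else None
```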
### Models to fix:
- [x] Bart | https://github.com/huggingface/transformers/pull/21866
- [x] Bert
- [x] BigBird | https://github.com/huggingface/transformers/pull/21882
- [x] BigBirdPegasus
- [x] BioGPT | https://github.com/huggingface/transformers/pull/21844
- [x] Blenderbot
- [x] BlenderbotSmall
- [x] BlipText
- [x] Bloom
- [x] CodeGen
- [x] Esm
- [x] Git | https://github.com/huggingface/transformers/pull/21818
- [x] GPT2 | https://github.com/huggingface/transformers/pull/21772
- [x] GptNeo | https://github.com/huggingface/transformers/pull/21733
- [x] GptNeoX | https://github.com/huggingface/transformers/pull/21815
- [x] GPT-J
- [x] ImageGPT | https://github.com/huggingface/transformers/pull/21816
- [x] LED | https://github.com/huggingface/transformers/pull/21840
- [x] LongT5
- [x] M2M100 | https://github.com/huggingface/transformers/pull/21841
- [x] Marian | https://github.com/huggingface/transformers/pull/21842
- [x] MBart | https://github.com/huggingface/transformers/pull/21918
- [x] MegatronBert | https://github.com/huggingface/transformers/pull/21921
- [x] MVP | https://github.com/huggingface/transformers/pull/21920
- [x] OPT
- [x] Pegasus
- [x] PegasusX
- [x] ProphetNet | https://github.com/huggingface/transformers/pull/21772
- [x] RemBert
- [x] RoFormer
- [x] Speech2Text
- [x] Speech2Text2
- [x] SpeechT5
- [x] SwitchTransformer
- [x] T5
- [x] TimeSeriesTransformer
- [x] TrajectoryTransformer
- [x] TrOCR
- [x] Whisper
- [x] XGLM
- [x] XLMRobertaXL
- [x] Xmod
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21737/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21736
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21736/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21736/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21736/events
|
https://github.com/huggingface/transformers/issues/21736
| 1,594,966,217
|
I_kwDOCUB6oc5fEUDJ
| 21,736
|
How to disable model parallelism and enable data parallelism when using Accelerate and `device_map='auto'`?
|
{
"login": "chenmingjiong",
"id": 44235429,
"node_id": "MDQ6VXNlcjQ0MjM1NDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/44235429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenmingjiong",
"html_url": "https://github.com/chenmingjiong",
"followers_url": "https://api.github.com/users/chenmingjiong/followers",
"following_url": "https://api.github.com/users/chenmingjiong/following{/other_user}",
"gists_url": "https://api.github.com/users/chenmingjiong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenmingjiong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenmingjiong/subscriptions",
"organizations_url": "https://api.github.com/users/chenmingjiong/orgs",
"repos_url": "https://api.github.com/users/chenmingjiong/repos",
"events_url": "https://api.github.com/users/chenmingjiong/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenmingjiong/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello @chenmingjiongjiong \r\nWhat is the VRAM of your GPU?\r\ncan you alternatively try `device_map={'':torch.cuda.current_device()}`?",
"> can you alternatively try `device_map={'':torch.cuda.current_device()}`\r\n\r\nThis solved my problem. Thanks!\r\n\r\nThen I got another error about bitsandbytes, I have submitted an issue in their [repo](https://github.com/TimDettmers/bitsandbytes/issues/162). ",
"> \r\n\r\nWow, this is interesting! Could you explain why this trick works?",
"Sure @beyondguo \r\nPer my understanding, and if I got it right it should very simple. `device_map={\"\":0}` simply means \"try to fit the entire model on the device 0\" - device 0 in this case would be the GPU-0 \r\nIn a distributed setting `torch.cuda.current_device()` should return the current device the process is working on. If you have 4 GPUs and running DDP with 4 processes each process should be working on an independent GPU, meaning that if each process load a model with `device_map={\"\":i}` the process `i` will try to fit the entire model on the GPU `i`, this leads to properly having `n` working processes that have a replica of the model.\r\nI remember I had some issues while using `torch.cuda.current_device()` therefore now I advise users to use `accelerate` instead and retrieve the current process index with the following trick:\r\n```python\r\nfrom accelerate import Accelerator\r\n\r\ndummy_accelerator = Accelerator()\r\ncurrent_device = dummy_accelerator.process_index\r\n```\r\nLet me know if anything is unclear",
"Thanks @younesbelkada \r\nNow I'm using LoRA to tune a LLM (ChatGLM-6B) using 2 * A800 80G. I've got some findings that really confuse me.\r\n\r\nThe first problem:\r\n- Setting `device_map=\"auto”` to my understanding means setting model parallelization (MP), which will put the model layers into different devices. Thus, during training, only one GPU is calculating.\r\n- Setting `model.is_parallelizable=False` means I don't want to set MP.\r\n\r\nHowever, if I both set `device_map=\"auto”` and `model.is_parallelizable=False`, model parallelization is still activated. I think `model.is_parallelizable=False` should block the model parallelization.\r\n\r\nSecond problem:\r\n- Setting `device_map={'':torch.cuda.current_device()}`, it means the model is copied to both GPUs. \r\n- Setting device_map=\"auto\", I see the model to split into two parts:\r\n\r\nHowever, I found the latter method consumes nearly the save GPU memories per GPU as the first method. Why? I thought it should only consume half the memories per GPU compared with the first method.\r\n\r\n---\r\nOne more thing, using `device_map=\"auto\"`, the batch size is halved, compared with `device_map={'':torch.cuda.current_device()}`, however, it is even 1.5 x faster! Could you please explain why this happens? Many thanks!",
"Hi @beyondguo \r\nThanks for looping back\r\n1- Yes setting device_map = auto means that you want to set Model Parallelism, meaning putting the model into different GPU layers and one GPU at a time will be used \r\n2- I think in the latest versions of transformers this argument is not needed anymore\r\nRegarding the second problem I think this is expected, if you run things correctly if you have a copy of the model in 2 GPUs you will also have 2 copies of the optimizer states and the input data will be also split across both processes",
"Thanks for your detailed reply! @younesbelkada \r\n\r\nTo my understanding, when using `device_map=\"auto\"`, only a subset of all layers is allocated to one GPU, which should lead to **lower** GPU consumption. However, it consumes **nearly the same** GPU memories as setting `device_map={'':torch.cuda.current_device()}`.",
"I see, thanks for your reply! \r\nCan you provide more details (how many GBs allocated, which model, etc.?) Thanks!",
"Sure.\r\nModel: ChatGLM-6B\r\ndevice: 4 * A800-80G\r\n\r\n70 GBs allocated for each GPU.\r\n\r\nThe code I'm using is https://github.com/beyondguo/LLM-Tuning/blob/796384e837b3b6d70564d50ef5bb46f9175cb700/chatglm_lora_tuning.py#L87\r\n",
"Thanks for sharing those\r\n\r\n> Model: ChatGLM-6B\r\n\r\nI see the model is running in full precision, a 6B model would require 24GB VRAM just to be loaded on the GPU\r\n\r\n> 70 GBs allocated for each GPU. \r\n\r\nDo you run your script using `torch.distributed.run` or just `python yourscript.py`?",
"simply `python yourscript.py`, I'm using Trainer, which I think should automatically manage the GPU allocation.",
"I see better now, if you want to benefit from data parallelism as mentioned here: https://github.com/huggingface/transformers/issues/21736#issuecomment-1595699638 or in the original message from the author you need 2 things:\r\n- use the main branch of transformers that contains multiple fixes of accelerate + Trainer integration\r\n- run `accelerate config` --> select multi GPU then run your script with `accelerate launch yourscript.py`. to make sure that only the main process saves the model you can add a simple check in the `model.save_pretrained` and do something like that instead:\r\n```python\r\nif trainer.accelerator.is_main_process:\r\n model.save_pretrained(training_args.output_dir)\r\n```",
"Thanks! I will try these later.",
"Hi @younesbelkada \r\nSorry to bother you again. I'm still working on the \"device_map\" thing... I'm curious how does `transformers` automatically allocate the layers to different GPUs.\r\n\r\nWhen I load the [ChatGLM-6B](https://huggingface.co/THUDM/chatglm-6b/blob/main/modeling_chatglm.py) model, using `device_map=\"auto\"`, I see the layers are allocated to:\r\n```\r\n{'transformer.word_embeddings': 0,\r\n 'lm_head': 0, <-----\r\n 'transformer.layers.0': 0,\r\n 'transformer.layers.1': 0,\r\n 'transformer.layers.2': 0,\r\n 'transformer.layers.3': 0,\r\n 'transformer.layers.4': 0,\r\n 'transformer.layers.5': 1,\r\n 'transformer.layers.6': 1,\r\n 'transformer.layers.7': 1,\r\n 'transformer.layers.8': 1,\r\n 'transformer.layers.9': 1,\r\n 'transformer.layers.10': 1,\r\n 'transformer.layers.11': 1,\r\n 'transformer.layers.12': 1,\r\n 'transformer.layers.13': 1,\r\n 'transformer.layers.14': 2,\r\n 'transformer.layers.15': 2,\r\n 'transformer.layers.16': 2,\r\n 'transformer.layers.17': 2,\r\n 'transformer.layers.18': 2,\r\n 'transformer.layers.19': 2,\r\n 'transformer.layers.20': 2,\r\n 'transformer.layers.21': 2,\r\n 'transformer.layers.22': 2,\r\n...\r\n 'transformer.layers.24': 3,\r\n 'transformer.layers.25': 3,\r\n 'transformer.layers.26': 3,\r\n 'transformer.layers.27': 3,\r\n 'transformer.final_layernorm': 3}\r\n```\r\n\r\nAnd when I change the model to [ChatGLM2-6B](https://huggingface.co/THUDM/chatglm2-6b/blob/main/modeling_chatglm.py), the allocation is:\r\n```\r\n{'transformer.embedding': 0,\r\n 'transformer.rotary_pos_emb': 0,\r\n 'transformer.encoder.layers.0': 0,\r\n 'transformer.encoder.layers.1': 0,\r\n 'transformer.encoder.layers.2': 0,\r\n 'transformer.encoder.layers.3': 0,\r\n 'transformer.encoder.layers.4': 0,\r\n 'transformer.encoder.layers.5': 0,\r\n 'transformer.encoder.layers.6': 1,\r\n 'transformer.encoder.layers.7': 1,\r\n 'transformer.encoder.layers.8': 1,\r\n 'transformer.encoder.layers.9': 1,\r\n 'transformer.encoder.layers.10': 1,\r\n 'transformer.encoder.layers.11': 1,\r\n 'transformer.encoder.layers.12': 1,\r\n 'transformer.encoder.layers.13': 1,\r\n 'transformer.encoder.layers.14': 2,\r\n 'transformer.encoder.layers.15': 2,\r\n 'transformer.encoder.layers.16': 2,\r\n 'transformer.encoder.layers.17': 2,\r\n 'transformer.encoder.layers.18': 2,\r\n 'transformer.encoder.layers.19': 2,\r\n 'transformer.encoder.layers.20': 2,\r\n 'transformer.encoder.layers.21': 2,\r\n 'transformer.encoder.layers.22': 3,\r\n...\r\n 'transformer.encoder.layers.25': 3,\r\n 'transformer.encoder.layers.26': 3,\r\n 'transformer.encoder.layers.27': 3,\r\n 'transformer.encoder.final_layernorm': 3,\r\n 'transformer.output_layer': 3} <-----\r\n```\r\n\r\nMy question is, the `lm_head` layer in ChatGLM-6B and the `output_layer` in ChatGLM2-6B are both the **last** layer of the models, but why `lm_head` is in `cuda:0` (same as the input layer), the `output_layer` is put in `cuda:3` (different from the input layer).\r\n\r\nBecause of this, when I train the ChatGLM-6B, every things is fine; but when I train the ChatGLM2-6B, an error occurs during the model forward pass loss computing:\r\n`RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cuda:0! (when checking argument for argument target in method wrapper_CUDA_nll_loss_forward)`\r\n\r\nDo you know what's the problem? How can I fix this? 
Many thanks!\r\n\r\n---\r\nupdate:\r\n\r\nI have a workaround (which I think is too ugly, lol):\r\n```python\r\nmodel.hf_device_map['transformer.output_layer'] = model.hf_device_map['transformer.embedding']\r\nmodel = AutoModel.from_pretrained(\"THUDM/chatglm2-6b\", trust_remote_code=True, device_map=model.hf_device_map)\r\n```\r\nwhich is to manually change the `output_layer`'s device, and reload the model.\r\n",
"Hi @beyondguo \r\nThanks for the ping, and no problem at all\r\n`device_map='auto'` will dispatch the model evenly across all available GPUs.\r\nI think the issue you are facing is related to the fact that for the first model the weight is probably tied with the embedding layer (i.e. they are the same), hence the device of that layer being on the first GPU device. For the second model, maybe the lm_head is not tied to the embedding layer. Regarding your solution, I think it looks fine, you can probably load the first model on the meta device using `init_empty_weights()` context manager from accelerate and make it slightly more efficient.\r\nThanks!",
"Hey, Ive tried \"everything\" now, but cant get 8bit lora multi-gpu training to work. I have a minimal example here:\r\n\r\nhttps://gist.github.com/simeneide/80aa37108474aa32b82cb7258778287b\r\n\r\nAlso tried the `device_map={'':torch.cuda.current_device()}` trick above without success. Not really sure what you are doing, @beyondguo ?\r\n\r\nAnyone? Im getting desperate 😂 \r\n\r\n```\r\ntransformers==4.31\r\nbitsandbytes==0.41.1\r\naccelerate==0.21.0\r\ntorch == 2.0.1\r\n```",
"Hi @simeneide \r\n\r\nThanks for the ping, can you try out the solution proposed in this comment: https://github.com/huggingface/accelerate/issues/1840#issuecomment-1683105994 ?",
"I dont hope the ping was during sleeping hours 😬 \r\n\r\nYes, that worked. Thank you very much!",
"Hahah no worries it wasn't ! Great that the solution worked! :D "
] | 1,677
| 1,692
| 1,677
|
NONE
| null |
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.4.15-1.el7.elrepo.x86_64-x86_64-with-glibc2.27
- Python version: 3.10.9
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I got this error when finetuning "EleutherAI/gpt-j-6B" using LoRA on 8×2080ti:
`RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1`
Reproduce steps:
clone this repo: https://github.com/CarperAI/trlx
modify the script: examples/summarize_rlhf/sft/train_gptj_summarize.py
```
import random
import os

import evaluate
import numpy as np
import torch
import torch.nn as nn
from peft import LoraConfig, get_peft_model
from summarize_dataset import TLDRDataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    default_data_collator,
)


def set_seed(seed_val=42):
    random.seed(seed_val)
    np.random.seed(seed_val)
    torch.manual_seed(seed_val)
    torch.cuda.manual_seed_all(seed_val)


if __name__ == "__main__":
    output_dir = "gptj-supervised-summarize-checkpoint"
    train_batch_size = 4
    gradient_accumulation_steps = 1
    learning_rate = 1e-5
    eval_batch_size = 1
    eval_steps = 500
    max_input_length = 550
    save_steps = 1000
    num_train_epochs = 5
    random.seed(42)
    os.environ["WANDB_DISABLED"] = "true"

    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
    model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", use_cache=False, load_in_8bit=True, device_map='auto')
    tokenizer.pad_token = tokenizer.eos_token
    model.resize_token_embeddings(len(tokenizer))
    tokenizer.pad_token_id = tokenizer.eos_token_id
    model.config.end_token_id = tokenizer.eos_token_id
    model.config.pad_token_id = model.config.eos_token_id

    for param in model.parameters():
        param.requires_grad = False  # freeze the model - train adapters later
        if param.ndim == 1:
            # cast the small parameters (e.g. layernorm) to fp32 for stability
            param.data = param.data.to(torch.float32)

    model.gradient_checkpointing_enable()
    model.enable_input_require_grads()

    class CastOutputToFloat(nn.Sequential):
        def forward(self, x):
            return super().forward(x).to(torch.float32)

    model.lm_head = CastOutputToFloat(model.lm_head)

    config = LoraConfig(
        r=16,
        lora_alpha=32,
        target_modules=["q_proj", "v_proj"],
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)

    # Set up the datasets
    data_path = "CarperAI/openai_summarize_tldr"
    train_dataset = TLDRDataset(
        data_path,
        tokenizer,
        "train",
        max_length=max_input_length,
    )
    dev_dataset = TLDRDataset(
        data_path,
        tokenizer,
        "valid",
        max_length=max_input_length,
    )

    # Set up the metric
    rouge = evaluate.load("rouge")

    def compute_metrics(eval_preds):
        labels_ids = eval_preds.label_ids
        pred_ids = eval_preds.predictions
        pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
        label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
        result = rouge.compute(predictions=pred_str, references=label_str)
        return result

    # Create a preprocessing function to extract out the proper logits from the model output
    def preprocess_logits_for_metrics(logits, labels):
        if isinstance(logits, tuple):
            logits = logits[0]
        return logits.argmax(dim=-1)

    # Prepare the trainer and start training
    training_args = TrainingArguments(
        output_dir=output_dir,
        evaluation_strategy="steps",
        eval_accumulation_steps=1,
        learning_rate=learning_rate,
        per_device_train_batch_size=train_batch_size,
        per_device_eval_batch_size=eval_batch_size,
        gradient_checkpointing=True,
        half_precision_backend="auto",
        fp16=True,
        adam_beta1=0.9,
        adam_beta2=0.95,
        gradient_accumulation_steps=gradient_accumulation_steps,
        num_train_epochs=num_train_epochs,
        warmup_steps=100,
        eval_steps=eval_steps,
        save_steps=save_steps,
        load_best_model_at_end=True,
        logging_steps=50,
        # deepspeed="examples/summarize_rlhf/sft/ds_config_gptj.json",
    )
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=dev_dataset,
        compute_metrics=compute_metrics,
        data_collator=default_data_collator,
        preprocess_logits_for_metrics=preprocess_logits_for_metrics,
    )
    trainer.train()
    trainer.save_model(output_dir)
```
and run:
`accelerate launch --num_processes 8 examples/summarize_rlhf/sft/train_gptj_summarize.py`
Full error logs:
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /data/trlx/examples/summarize_rlhf/sft/train_gptj_summarize_lora_acc.py:154 in <module> │
│ │
│ 151 │ │ data_collator=default_data_collator, │
│ 152 │ │ preprocess_logits_for_metrics=preprocess_logits_for_metrics, │
│ 153 │ ) │
│ ❱ 154 │ trainer.train() │
│ 155 │ trainer.save_model(output_dir) │
│ 156 │
│ │
│ /data/transformers/src/transformers/trainer.py:1631 in train │
│ │
│ 1628 │ │ inner_training_loop = find_executable_batch_size( │
│ 1629 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │
│ 1630 │ │ ) │
│ ❱ 1631 │ │ return inner_training_loop( │
│ 1632 │ │ │ args=args, │
│ 1633 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │
│ 1634 │ │ │ trial=trial, │
│ │
│ /data/transformers/src/transformers/trainer.py:1898 in _inner_training_loop │
│ │
│ 1895 │ │ │ │ │ with model.no_sync(): │
│ 1896 │ │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │
│ 1897 │ │ │ │ else: │
│ ❱ 1898 │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │
│ 1899 │ │ │ │ │
│ 1900 │ │ │ │ if ( │
│ 1901 │ │ │ │ │ args.logging_nan_inf_filter │
│ │
│ /data/transformers/src/transformers/trainer.py:2643 in training_step │
│ │
│ 2640 │ │ │ return loss_mb.reduce_mean().detach().to(self.args.device) │
│ 2641 │ │ │
│ 2642 │ │ with self.compute_loss_context_manager(): │
│ ❱ 2643 │ │ │ loss = self.compute_loss(model, inputs) │
│ 2644 │ │ │
│ 2645 │ │ if self.args.n_gpu > 1: │
│ 2646 │ │ │ loss = loss.mean() # mean() to average on multi-gpu parallel training │
│ │
│ /data/transformers/src/transformers/trainer.py:2675 in compute_loss │
│ │
│ 2672 │ │ │ labels = inputs.pop("labels") │
│ 2673 │ │ else: │
│ 2674 │ │ │ labels = None │
│ ❱ 2675 │ │ outputs = model(**inputs) │
│ 2676 │ │ # Save past state if it exists │
│ 2677 │ │ # TODO: this needs to be fixed and made cleaner later. │
│ 2678 │ │ if self.args.past_index >= 0: │
│ │
│ /home/chenmingrui/miniconda3/envs/petals/lib/python3.10/site-packages/torch/nn/modules/module.py │
│ :1194 in _call_impl │
│ │
│ 1191 │ │ # this function, and just call forward. │
│ 1192 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │
│ 1193 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1194 │ │ │ return forward_call(*input, **kwargs) │
│ 1195 │ │ # Do not call functions when jit is used │
│ 1196 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1197 │ │ if self._backward_hooks or _global_backward_hooks: │
│ │
│ /home/chenmingrui/miniconda3/envs/petals/lib/python3.10/site-packages/torch/nn/parallel/data_par │
│ allel.py:157 in forward │
│ │
│ 154 │ │ │ │
│ 155 │ │ │ for t in chain(self.module.parameters(), self.module.buffers()): │
│ 156 │ │ │ │ if t.device != self.src_device_obj: │
│ ❱ 157 │ │ │ │ │ raise RuntimeError("module must have its parameters and buffers " │
│ 158 │ │ │ │ │ │ │ │ │ "on device {} (device_ids[0]) but found one of " │
│ 159 │ │ │ │ │ │ │ │ │ "them on device: {}".format(self.src_device_obj, │
│ 160 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1
```
### Expected behavior
I'm using 8×2080ti.
When training on 1×2080ti and running `python examples/summarize_rlhf/sft/train_gptj_summarize.py`, the above code runs normally, which means the model and data fit on a single GPU. I then want to use data parallelism without model parallelism, just like DDP.
The `load_in_8bit` option in `.from_pretrained()` requires setting the `device_map` option. With `device_map='auto'`, it seems that the model is loaded across several GPUs, as in naive model parallelism, which results in this error while training: `RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1`
Maybe setting `device_map` correctly would solve this problem, but I can't find how to do this in the documentation.
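For reference, a minimal sketch of the workaround that resolved this thread (see the comments above) — each DDP process loads its own full copy of the model on its own GPU:
```
from accelerate import Accelerator
from transformers import AutoModelForCausalLM

accelerator = Accelerator()
current_device = accelerator.process_index  # one GPU per DDP process

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    load_in_8bit=True,
    device_map={"": current_device},  # fit the entire model on this process's GPU
)
```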
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21736/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21735
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21735/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21735/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21735/events
|
https://github.com/huggingface/transformers/pull/21735
| 1,594,878,434
|
PR_kwDOCUB6oc5KgI-y
| 21,735
|
Fix resume_from_checkpoint for deepspeed
|
{
"login": "mosheber",
"id": 22236370,
"node_id": "MDQ6VXNlcjIyMjM2Mzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/22236370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mosheber",
"html_url": "https://github.com/mosheber",
"followers_url": "https://api.github.com/users/mosheber/followers",
"following_url": "https://api.github.com/users/mosheber/following{/other_user}",
"gists_url": "https://api.github.com/users/mosheber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mosheber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mosheber/subscriptions",
"organizations_url": "https://api.github.com/users/mosheber/orgs",
"repos_url": "https://api.github.com/users/mosheber/repos",
"events_url": "https://api.github.com/users/mosheber/events{/privacy}",
"received_events_url": "https://api.github.com/users/mosheber/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21735). All of your documentation changes will be reflected on that endpoint.",
"Hi @mosheber! The CI is not triggered. It seems there is an issue with your CircleCI permissions, the tests won't run.\r\nCould you first try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?\r\n\r\nThank you, and let me know if the CI could be triggered after this :-)",
"> \r\n\r\n@ydshieh @sgugger , thank you for approving! For some reason the CircleCI still wont run properly, I tried logging out, revoking, logging back in, refreshing, yet to avail. Could you perform the merge on your end? Or perhaps trigger the CI run? ",
"Hi @mosheber Thank you for trying. I will push an empty commit to your PR branch to trigger CI - are you OK with it?",
"Well, I think one more step to follow from your side: (as I am not able to trigger even with a push)\r\n\r\n\r\nCould you check if you are following huggingface/transformers instead of your own fork. You can check it at this link\r\nhttps://app.circleci.com/projects/project-dashboard/github/mosheber/\r\n\r\nIf you are following your own fork, you have to unfollow it, and follow `huggingface/transformers` instead.\r\n\r\n> If a user submits a pull request to your repository from a fork, but no pipeline is triggered, then the user most likely is following a project fork on their personal account rather than the project itself of CircleCI, causing the jobs to trigger under the user’s personal account and not the organization account. To resolve this issue, have the user unfollow their fork of the project on CircleCI and instead follow the source project. This will trigger their jobs to run under the organization when they submit pull requests.\r\n\r\n\r\nhttps://circleci.com/docs/oss/#build-pull-requests-from-forked-repositories",
"If you are OK, I can also fork your PR, create a new one, but add your name as a contributor then merge the new PR. This might be easier.",
"> If you are OK, I can also fork your PR, create a new one, but add your name as a contributor then merge the new PR. This might be easier.\r\n\r\nThis seems to be the easiest approach, lets go with that",
"well, while I pushed to `huggingface/transformers`, the CI here is triggered ... let's see.",
"> well, while I pushed to `huggingface/transformers`, the CI here is triggered ... let's see.\r\n\r\nLooks like all checks have passed, thanks! Merging should now be possible ",
"> > well, while I pushed to `huggingface/transformers`, the CI here is triggered ... let's see.\r\n> \r\n> Looks like all checks have passed, thanks! Merging should now be possible\r\n\r\nYeah, but see my comment\r\n\r\nhttps://github.com/huggingface/transformers/pull/21735#discussion_r1117246877",
"> Looks like all checks have passed, thanks! Merging should now be possible\r\n\r\nBut it's not ready. Please revisit: https://github.com/huggingface/transformers/pull/21735#discussion_r1116267795",
"> > Looks like all checks have passed, thanks! Merging should now be possible\r\n> \r\n> But it's not ready. Please revisit: [#21735 (comment)](https://github.com/huggingface/transformers/pull/21735#discussion_r1116267795)\r\n\r\nSure thing, I removed it and changed the elif",
"Thanks Moshe, I need to run offline tests since deepspeed tests don't run on live CI (need gpus), and will merge once all is green there.",
"@ydshieh - do you by chance have any idea why CI isn't running? This time it appears to be some other problem than the original one we discussed in this issue. Thank you!",
"Moshe, CircleCI doesn't like something about your CircleCI account settings. Since there is nothing I can do about it and I don't want this to drag forever I've recreated your PR here https://github.com/huggingface/transformers/pull/21798 - so let's finish it there. Thank you.\r\n",
"> Moshe, CircleCI doesn't like something about your CircleCI account settings. Since there is nothing I can do about it and I don't want this to drag forever I've recreated your PR here #21798 - so let's finish it there. Thank you.\r\n\r\nNo problem. Just in case, I also tried to trigger another CI run after unfollowing the project as suggested here:\r\n\r\nhttps://support.circleci.com/hc/en-us/articles/360008097173-Troubleshooting-why-pull-requests-are-not-triggering-jobs-on-my-organization-\r\n\r\nMaybe it will help too",
"Thank you for digging this up, Moshe. The relevant quote seems to be:\r\n\r\n> If you're following the fork instead of the upstream repo\r\n> \r\n> A user who submits a pull request to your repository from a fork, but no pipeline is triggered with the pull request. This can happen when the user is following the project fork on their personal account rather than the project itself on CircleCI.\r\n> \r\n> This will cause the jobs to trigger under the user's personal account. If the user is following a fork of the repository on CircleCI, we will only build on that fork and not the parent, so the parent’s PR will not get status updates. \r\n> \r\n> In these cases, the user unfollows their fork of the project on CircleCI. This will trigger their jobs to run under the organization when they submit pull requests. Those users can optionally follow the source project if they wish to see the pipelines.\r\n\r\nwhich as you said you did and the CI has started. Excellent work - now we would know what to tell the users in the future.",
"> \r\n\r\nGlad I could help! Thanks! "
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
@stas00 This fixes the resume_from_checkpoint for deepspeed, by ensuring that the deepspeed engine is the one to load the checkpoint.
# What does this PR do?
It disables the regular `load_from_checkpoint` path, allowing checkpoint loading to go through the deepspeed engine instead.
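Schematically, the control flow becomes something like the following (a simplified sketch, not the exact `Trainer` code — the helper, attribute, and method names here are illustrative):
```python
def maybe_resume(trainer, resume_from_checkpoint):
    # sketch: let the deepspeed engine own checkpoint loading when enabled
    if resume_from_checkpoint is None:
        return
    if trainer.deepspeed is not None:
        # deepspeed restores model/optimizer/scheduler state itself
        trainer.deepspeed.load_checkpoint(resume_from_checkpoint)
    else:
        # regular (non-deepspeed) checkpoint loading
        trainer._load_from_checkpoint(resume_from_checkpoint)
```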
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21735/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21735/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21735",
"html_url": "https://github.com/huggingface/transformers/pull/21735",
"diff_url": "https://github.com/huggingface/transformers/pull/21735.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21735.patch",
"merged_at": 1677353454000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21734
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21734/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21734/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21734/events
|
https://github.com/huggingface/transformers/issues/21734
| 1,594,837,538
|
I_kwDOCUB6oc5fD0oi
| 21,734
|
Discrepancies in scores shapes between docs and generate methods
|
{
"login": "icannos",
"id": 1743026,
"node_id": "MDQ6VXNlcjE3NDMwMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1743026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/icannos",
"html_url": "https://github.com/icannos",
"followers_url": "https://api.github.com/users/icannos/followers",
"following_url": "https://api.github.com/users/icannos/following{/other_user}",
"gists_url": "https://api.github.com/users/icannos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/icannos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/icannos/subscriptions",
"organizations_url": "https://api.github.com/users/icannos/orgs",
"repos_url": "https://api.github.com/users/icannos/repos",
"events_url": "https://api.github.com/users/icannos/events{/privacy}",
"received_events_url": "https://api.github.com/users/icannos/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @icannos 👋 Here the problem is the documentation, which is incomplete :) \r\n\r\nHave a look at my answer in our forum, for a similar question -- https://discuss.huggingface.co/t/t5-why-do-we-have-more-tokens-expressed-via-cross-attentions-than-the-decoded-sequence/31893",
"Updating the docs is in our TODO list!",
"> Hey @icannos wave Here the problem is the documentation, which is incomplete :)\r\n> \r\n> Have a look at my answer in our forum, for a similar question -- https://discuss.huggingface.co/t/t5-why-do-we-have-more-tokens-expressed-via-cross-attentions-than-the-decoded-sequence/31893\r\n\r\nThanks for your quick reply but I'm not sure it answers my problem. It is not about the length of the generated sequences but about the number of beams reported.\r\n\r\nI believe the behavior should be identical in both the examples I gave. I don't understand why `num_return_sequences` is at all involved in the shape of scores. Moreover, it is not involved when there is not sampling but is there when sampling is involved.",
"Hey @icannos -- my apologies. I was multitasking and did not read your issue with proper attention. You're right, my answer does not address your question.\r\n\r\nThe difference in behavior you describe stems from these two lines: [beam search](https://github.com/huggingface/transformers/blob/82e61f34451dbea2de8d2220d51b0609d605dfd7/src/transformers/generation/utils.py#L1470) // [beam sample](https://github.com/huggingface/transformers/blob/82e61f34451dbea2de8d2220d51b0609d605dfd7/src/transformers/generation/utils.py#L1507). According to the code specified there, the behavior you see is correct (even if not properly documented!).\r\n\r\nThis means that `beam_sample` runs `num_return_sequences` independent beam searches for a given input (with the next token being selected from sampling, rather than simply the most likely token) and keeps the top beam in each one, as opposed to simply drawing the top `num_return_sequences` beams from one beam search, resulting in `num_return_sequences` times more `scores`. This is done to increase diversity in the output: the top `num_return_sequences` of a given beam search tend to be very similar to each other.\r\n\r\nDoes this answer your questions? 🤗",
"Yes I had found those lines, but I did think it was a mistake or misleading behavior. \r\n\r\nI don't know how it could be done, but I feel like it would be good that the behaviors were the same for all generation methods.\r\n\r\nSo I guess, if I want the sequences of probability distribution for each sequence I should reshape the scores to be `batch_size, num_return_sequences, num_beam, vocab_size` and retrieve `scores[:, :, 0, :]` right ?\r\n\r\nThank for your help! \r\n",
"@icannos \r\n\r\n> I don't know how it could be done, but I feel like it would be good that the behaviors were the same for all generation methods.\r\n\r\nYeah... it's hard to balance a consistent API with consistent results. In this particular case, they are at odds with each other :( We favor consistent results in an attempt to maximize usefulness, as only advanced users tend to fiddle with these details -- how can a beginner know that gathering multiple beams from a single stochastic beam search is a bad idea? 😛 \r\n\r\n> So I guess, if I want the sequences of probability distribution for each sequence I should reshape the scores to be batch_size, num_return_sequences, num_beam, vocab_size and retrieve scores[:, :, 0, :] right ?\r\n\r\nEither that or slicing, yes ",
"Thanks a lot for your help, I'll close the issue.\r\n\r\nYeah I get the results driven API. I hope at least this issue might help someone in the future stumbling upon these problems.",
"@gante \r\nI would like to reopen this issue, I'm getting it even with greedy decoding (which should not run beam search AFAIK). \r\n\r\nWhen calling `output = model.generate(**input, output_scores=True, renormalize_logits=True, max_new_tokens=3, min_new_tokens=1, do_sample=False, return_dict_in_generate=True)`, on accessing `output['scores']`, I get 3 tensors, each of which are of shape (batch_size, 32128). This does not make much sense because flan-t5 only has 32100 tokens in its vocab. \r\n\r\nIt's not clear where these extra scores are coming from, and which ones I should ignore?",
"Hey @adivekar-utexas 👋 \r\n\r\nMany models run computations with embedding sizes larger than the actual vocabulary size, for speed purposes (e.g. [see here why](https://twitter.com/karpathy/status/1621578354024677377)). Any time you see an embedding size larger than the vocabulary size, you can safely discard the tokens whose index is beyond the vocabulary size :)",
"Thanks for the confirmation @gante !\r\n\r\nFor reference for others finding this issue, the necessary fix is to truncate the vocab dimension to `len(tokenizer.get_vocab())`, i.e. `scores_at_timesteps: List[Tensor] = [scores_at_timestep[:, len(tokenizer.get_vocab())] for scores_at_timestep in output['scores']]`\r\n\r\n`tokenizer.vocab_size` sometimes does not take into account special tokes. "
] | 1,677
| 1,681
| 1,677
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-6.0.19-4-MANJARO-x86_64-with-glibc2.37
- Python version: 3.10.9
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I use the generate method to retrieve the probability distributions at each step of the generation. I rely on `scores` to get them.
I noticed discrepancies in the shape of `scores` between the docs, the actual behavior, and different generation configurations (differences that should not change anything, I believe).
The following snippet highlight the main problem:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
string_inputs = [
"translate in french: I love cats",
"Answer the question: what is 3+5 ?"
]
inputs = tokenizer(string_inputs, padding=True, truncation=True, return_tensors='pt')
outputs = model.generate(**inputs, return_dict_in_generate=True, output_scores=True, num_beams=7, num_return_sequences=5)
print(type(outputs))
print(outputs.scores[0].shape)
# Output:
# <class 'transformers.generation.utils.BeamSearchEncoderDecoderOutput'>
# torch.Size([14, 32128])
# This is consistent with the doc and is the expected behavior: 14 = (batch_size=2)*(num_beams=7)
# https://huggingface.co/docs/transformers/internal/generation_utils#transformers.generation.BeamSearchEncoderDecoderOutput
outputs = model.generate(**inputs, return_dict_in_generate=True, output_scores=True, num_beams=7, num_return_sequences=5, do_sample=True)
print(type(outputs))
print(outputs.scores[0].shape)
# Output:
# <class 'transformers.generation.utils.BeamSampleEncoderDecoderOutput'>
# torch.Size([70, 32128])
# This is not consistent with the documentation and seems not to be the expected behavior:
# 70 = (batch_size=2)*(num_beams=7)*(num_return_sequences=5)
# https://huggingface.co/docs/transformers/internal/generation_utils#transformers.generation.BeamSampleEncoderDecoderOutput
```
It is worth pointing out that in the documentation, the shape `batch_size*num_beams*num_return_sequences` is to be expected with DecoderOnlyOutput; however, I don't understand why there would be a difference, nor what `num_return_sequences` does there anyway.
### Expected behavior
The shape of an element of `scores` should always be `(batch_size*num_beams, vocab_size)`, and not `(batch_size*num_beams*num_return_sequences, vocab_size)`.
If I missed something and I am, in fact, mistaken, then at the very least the documentation is not consistent with the actual behavior.
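For concreteness, a minimal check (reusing the names from the snippet above) that makes the row-count mismatch visible:
```python
# Assumes `inputs`, `model`, and `outputs` from the beam-sample call above.
batch_size = inputs["input_ids"].shape[0]  # 2
num_beams = 7
rows = outputs.scores[0].shape[0]          # 70 here, but 14 for plain beam search
print(f"rows = {rows}, batch_size * num_beams = {batch_size * num_beams}")
```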
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21734/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21734/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21733
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21733/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21733/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21733/events
|
https://github.com/huggingface/transformers/pull/21733
| 1,594,717,386
|
PR_kwDOCUB6oc5KfnGV
| 21,733
|
[`GPTNeo`] Fix gradient checkpointing bug
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @gante ",
"Change looks reasonable to me. Could you provide links to the check you refer to where `use_cache` is set to False and also add the error output so this solution can be found easily if encountered again please? ",
"Sure, here is the traceback: \r\n```bash\r\n│ /home/younes_huggingface_co/code/transformers/src/transformers/generation/utils.py:1402 in │\r\n│ generate │\r\n│ │\r\n│ 1399 │ │ │ │ ) │\r\n│ 1400 │ │ │ │\r\n│ 1401 │ │ │ # 11. run greedy search │\r\n│ ❱ 1402 │ │ │ return self.greedy_search( │\r\n│ 1403 │ │ │ │ input_ids, │\r\n│ 1404 │ │ │ │ logits_processor=logits_processor, │\r\n│ 1405 │ │ │ │ stopping_criteria=stopping_criteria, │\r\n│ │\r\n│ /home/younes_huggingface_co/code/transformers/src/transformers/generation/utils.py:2197 in │\r\n│ greedy_search │\r\n│ │\r\n│ 2194 │ │ │ model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) │\r\n│ 2195 │ │ │ │\r\n│ 2196 │ │ │ # forward pass to get next token │\r\n│ ❱ 2197 │ │ │ outputs = self( │\r\n│ 2198 │ │ │ │ **model_inputs, │\r\n│ 2199 │ │ │ │ return_dict=True, │\r\n│ 2200 │ │ │ │ output_attentions=output_attentions, │\r\n│ │\r\n│ /home/younes_huggingface_co/miniconda3/envs/fix-test/lib/python3.9/site-packages/torch/nn/module │\r\n│ s/module.py:1194 in _call_impl │\r\n│ │\r\n│ 1191 │ │ # this function, and just call forward. │\r\n│ 1192 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │\r\n│ 1193 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │\r\n│ ❱ 1194 │ │ │ return forward_call(*input, **kwargs) │\r\n│ 1195 │ │ # Do not call functions when jit is used │\r\n│ 1196 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │\r\n│ 1197 │ │ if self._backward_hooks or _global_backward_hooks: │\r\n│ │\r\n│ /home/younes_huggingface_co/miniconda3/envs/fix-test/lib/python3.9/site-packages/accelerate/hook │\r\n│ s.py:158 in new_forward │\r\n│ │\r\n│ 155 │ │ │ with torch.no_grad(): │\r\n│ 156 │ │ │ │ output = old_forward(*args, **kwargs) │\r\n│ 157 │ │ else: │\r\n│ ❱ 158 │ │ │ output = old_forward(*args, **kwargs) │\r\n│ 159 │ │ return module._hf_hook.post_forward(module, output) │\r\n│ 160 │ │\r\n│ 161 │ module.forward = new_forward │\r\n│ │\r\n│ /home/younes_huggingface_co/code/transformers/src/transformers/models/gpt_neo/modeling_gpt_neo.p │\r\n│ y:739 in forward │\r\n│ │\r\n│ 736 │ │ \"\"\" │\r\n│ 737 │ │ return_dict = return_dict if return_dict is not None else self.config.use_return │\r\n│ 738 │ │ │\r\n│ ❱ 739 │ │ transformer_outputs = self.transformer( │\r\n│ 740 │ │ │ input_ids, │\r\n│ 741 │ │ │ past_key_values=past_key_values, │\r\n│ 742 │ │ │ attention_mask=attention_mask, │\r\n│ │\r\n│ /home/younes_huggingface_co/miniconda3/envs/fix-test/lib/python3.9/site-packages/torch/nn/module │\r\n│ s/module.py:1194 in _call_impl │\r\n│ │\r\n│ 1191 │ │ # this function, and just call forward. 
│\r\n│ 1192 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │\r\n│ 1193 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │\r\n│ ❱ 1194 │ │ │ return forward_call(*input, **kwargs) │\r\n│ 1195 │ │ # Do not call functions when jit is used │\r\n│ 1196 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │\r\n│ 1197 │ │ if self._backward_hooks or _global_backward_hooks: │\r\n│ │\r\n│ /home/younes_huggingface_co/code/transformers/src/transformers/models/gpt_neo/modeling_gpt_neo.p │\r\n│ y:545 in forward │\r\n│ │\r\n│ 542 │ │ │ past_length = 0 │\r\n│ 543 │ │ │ past_key_values = tuple([None] * len(self.h)) │\r\n│ 544 │ │ else: │\r\n│ ❱ 545 │ │ │ past_length = past_key_values[0][0].size(-2) │\r\n│ 546 │ │ │\r\n│ 547 │ │ if position_ids is None: │\r\n│ 548 │ │ │ position_ids = torch.arange(past_length, input_shape[-1] + past_length, dtyp │\r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯\r\nIndexError: tuple index out of range\r\n```\r\n\r\nAnd `use_cache` is force-set to False [here](https://github.com/huggingface/transformers/blob/aff87da15b04b260c6057dd47a70376f2b2386f3/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L515) but `presents` are already initialized [here](https://github.com/huggingface/transformers/blob/aff87da15b04b260c6057dd47a70376f2b2386f3/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L503), hence returned [here](https://github.com/huggingface/transformers/blob/aff87da15b04b260c6057dd47a70376f2b2386f3/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L555)\r\n",
"The reason for the change sounds good 👍 \r\n\r\nTwo comments, though:\r\n1. We can solve this problem without adding more logic -- e.g. moving `if` that sets `use_cache=False` up in the class, before `presents` is initialized. Less logic = simpler code = fewer bugs = 💛 \r\n2. This is a widespread pattern in generate-compatible models. If we make this change, the least we should do is open an issue with the `Good First Issue` label that tracks which models have already received the change!",
"Thanks a lot for these suggestions @gante! This makes a lot of sense, \r\nI have adapted the code accordingly and drafted a Good First issue that we can edit once we figure out if the bug persists for other models",
"I'm getting same error in t5, is this the same reason ?\r\n\r\nFile /usr/local/lib/python3.8/dist-packages/transformers/models/t5/modeling_t5.py:506, in T5Attention.forward.<locals>.project(hidden_states, proj_layer, key_value_states, past_key_value)\r\n 502 if past_key_value is not None:\r\n 503 if key_value_states is None:\r\n 504 # self-attn\r\n 505 # (batch_size, n_heads, key_length, dim_per_head)\r\n--> 506 hidden_states = torch.cat([past_key_value, hidden_states], dim=2)\r\n 507 elif past_key_value.shape[2] != key_value_states.shape[1]:\r\n 508 # checking that the `sequence_length` of the `past_key_value` is the same as\r\n 509 # the provided `key_value_states` to support prefix tuning\r\n 510 # cross-attn\r\n 511 # (batch_size, n_heads, seq_length, dim_per_head)\r\n 512 hidden_states = shape(proj_layer(key_value_states))\r\n\r\nRuntimeError: Sizes of tensors must match except in dimension 2. Expected size 16 but got size 32 for tensor number 1 in the list.",
"Not sure in this case, can you share a reproducible script?"
] | 1,677
| 1,678
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes a small bug that a user can encounter while using `generate` with models that have `gradient_checkpointing` enabled. In the context of `trl` we call `generate` on the active model, which uses `gradient_checkpointing` to save memory.
Currently this snippet fails on `main`:
```python
import torch
from transformers import AutoModelForCausalLM
# Build the model.
model = AutoModelForCausalLM.from_pretrained("edbeeching/gpt-neo-125M-imdb", device_map="auto")
model.train()
model.gradient_checkpointing_enable()
gen = model.generate(input_ids=torch.LongTensor([0,1,2,3]).unsqueeze(0))
```
This is because `gradient_checkpointing_enable` and `use_cache` are not compatible, and calling `generate` uses `use_cache` by default. Though there is a check that forces `use_cache` to `False`, it is not enough on its own: the `presents` tuple is still initialized as an empty tuple and returned unpopulated, hence a confusing error.
IMO the fix should be to force-set the `presents` tuple to `None` if the model is using `gradient_checkpointing`.
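A minimal sketch of the pattern suggested in the review comments (paraphrased, not the exact diff; `self`, `logger`, and `use_cache` are the usual names inside a transformers model's `forward`): move the override before the container is allocated.
```python
# Paraphrased pattern, not the exact diff: force use_cache off *before*
# allocating the container that would collect past key/values.
if self.gradient_checkpointing and self.training and use_cache:
    logger.warning(
        "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
    )
    use_cache = False

presents = () if use_cache else None  # stays None, so callers never index an empty tuple
```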
cc @sgugger @amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21733/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21733",
"html_url": "https://github.com/huggingface/transformers/pull/21733",
"diff_url": "https://github.com/huggingface/transformers/pull/21733.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21733.patch",
"merged_at": 1677142100000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21732
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21732/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21732/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21732/events
|
https://github.com/huggingface/transformers/issues/21732
| 1,594,235,942
|
I_kwDOCUB6oc5fBhwm
| 21,732
|
AttributeError: 'GenerationConfig' object has no attribute 'architectures'
|
{
"login": "Ccode-lang",
"id": 78437178,
"node_id": "MDQ6VXNlcjc4NDM3MTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/78437178?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ccode-lang",
"html_url": "https://github.com/Ccode-lang",
"followers_url": "https://api.github.com/users/Ccode-lang/followers",
"following_url": "https://api.github.com/users/Ccode-lang/following{/other_user}",
"gists_url": "https://api.github.com/users/Ccode-lang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ccode-lang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ccode-lang/subscriptions",
"organizations_url": "https://api.github.com/users/Ccode-lang/orgs",
"repos_url": "https://api.github.com/users/Ccode-lang/repos",
"events_url": "https://api.github.com/users/Ccode-lang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ccode-lang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Until this is fixed I'll use the decapitated method.",
"Hello @Ccode-lang \r\nThanks for the issue , I think the correct keyword argument here is `generate_config`: \r\n```python\r\nfrom transformers import pipeline, set_seed, GenerationConfig\r\n\r\nconfig = GenerationConfig(max_new_tokens=500, temperature=1.2, num_return_sequences=1)\r\ngenerator = pipeline('text-generation', model='gpt2-xl', device=0, generate_config = config)\r\ngenerator(\"Hello\")\r\n```\r\ncc @gante in case I missed something",
"Oh, thanks! That doesn't show up on the docs so I didn't know about this parameter only `config` showed up. I'll test this later.",
"Now I get this error:\r\n```\r\n warnings.warn(\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Cooper Lynn\\gpt\\bot.py\", line 53, in <module>\r\n in_out()\r\n File \"C:\\Users\\Cooper Lynn\\gpt\\bot.py\", line 47, in in_out\r\n response = generator(text)[0]['generated_text'].removeprefix(text).split(\"\\n\")[0][1:]\r\n File \"C:\\Users\\Cooper Lynn\\gpt\\.env\\lib\\site-packages\\transformers\\pipelines\\text_generation.py\", line 210, in __call__\r\n return super().__call__(text_inputs, **kwargs)\r\n File \"C:\\Users\\Cooper Lynn\\gpt\\.env\\lib\\site-packages\\transformers\\pipelines\\base.py\", line 1084, in __call__\r\n return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n File \"C:\\Users\\Cooper Lynn\\gpt\\.env\\lib\\site-packages\\transformers\\pipelines\\base.py\", line 1091, in run_single\r\n model_outputs = self.forward(model_inputs, **forward_params)\r\n File \"C:\\Users\\Cooper Lynn\\gpt\\.env\\lib\\site-packages\\transformers\\pipelines\\base.py\", line 992, in forward\r\n model_outputs = self._forward(model_inputs, **forward_params)\r\n File \"C:\\Users\\Cooper Lynn\\gpt\\.env\\lib\\site-packages\\transformers\\pipelines\\text_generation.py\", line 252, in _forward\r\n generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)\r\n File \"C:\\Users\\Cooper Lynn\\gpt\\.env\\lib\\site-packages\\torch\\autograd\\grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"C:\\Users\\Cooper Lynn\\gpt\\.env\\lib\\site-packages\\transformers\\generation\\utils.py\", line 1197, in generate\r\n self._validate_model_kwargs(model_kwargs.copy())\r\n File \"C:\\Users\\Cooper Lynn\\gpt\\.env\\lib\\site-packages\\transformers\\generation\\utils.py\", line 1090, in _validate_model_kwargs\r\n raise ValueError(\r\nValueError: The following `model_kwargs` are not used by the model: ['generate_config'] (note: typos in the generate arguments will also show up in this list)\r\n```\r\n",
"I'm going to guess this is because I am using GPT-2?",
"Hey all -- the actual keyword for the argument is `generation_config`, and not `generate_config` :)\r\n\r\n`pipeline()` accepts any argument that `.generate()` does",
"Oh ok, thanks! I'm going to fix this once and for all now :rofl: \r\n",
"Ok that works, thanks for all of your help."
] | 1,677
| 1,677
| 1,677
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Run the code below:
```python
from transformers import pipeline, set_seed, GenerationConfig
config = GenerationConfig(max_new_tokens=500, temperature=1.2, num_return_sequences=1)
generator = pipeline('text-generation', model='gpt2-xl', device=0, config = config)
```
2. Get this error:
```
Traceback (most recent call last):
File "C:\Users\Cooper Lynn\gpt\bot.py", line 4, in <module>
generator = pipeline('text-generation', model='gpt2-xl', device=0, config = config)
File "C:\Users\Cooper Lynn\gpt\.env\lib\site-packages\transformers\pipelines\__init__.py", line 754, in pipeline
framework, model = infer_framework_load_model(
File "C:\Users\Cooper Lynn\gpt\.env\lib\site-packages\transformers\pipelines\base.py", line 224, in infer_framework_load_model
if config.architectures:
AttributeError: 'GenerationConfig' object has no attribute 'architectures'
```
# Extra info
I can't access the documentation for the `GenerationConfig()` function.
### Expected behavior
No error.
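Per the resolution in the comments, a working variant of the snippet (the keyword is `generation_config`; `pipeline()` forwards it to `generate()`):
```python
from transformers import pipeline, GenerationConfig

config = GenerationConfig(max_new_tokens=500, temperature=1.2, num_return_sequences=1)
generator = pipeline("text-generation", model="gpt2-xl", device=0, generation_config=config)
```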
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21732/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21731
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21731/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21731/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21731/events
|
https://github.com/huggingface/transformers/pull/21731
| 1,593,990,316
|
PR_kwDOCUB6oc5KdM2O
| 21,731
|
Fix `GPTSanJapaneseModel`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
```
return_dict = return_dict if return_dict is not None else self.config.return_dict
```
should be
```
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
```
as in many other places.
Otherwise torchscript tests will fail.
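For context, the reason this matters for torchscript: `use_return_dict` is a property on `PretrainedConfig` that disables dict outputs under tracing. A simplified sketch from memory (not a verbatim copy of the library source):
```python
class PretrainedConfigSketch:
    # TorchScript cannot trace dict-like outputs, so tracing forces tuple returns.
    return_dict: bool = True
    torchscript: bool = False

    @property
    def use_return_dict(self) -> bool:
        return self.return_dict and not self.torchscript
```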
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21731/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21731",
"html_url": "https://github.com/huggingface/transformers/pull/21731",
"diff_url": "https://github.com/huggingface/transformers/pull/21731.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21731.patch",
"merged_at": 1677060545000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21730
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21730/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21730/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21730/events
|
https://github.com/huggingface/transformers/pull/21730
| 1,593,802,321
|
PR_kwDOCUB6oc5Kck-e
| 21,730
|
[`MBart`] Fix cross attention mask check
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/21728
The current `MBart` code leads to an error that is hard to interpret for users due to a possible typo as pointed out in https://github.com/huggingface/transformers/issues/21728
To reproduce:
```python
import torch
from transformers import MBartModel
input_ids = torch.LongTensor([[0, 1, 1, 0]])
model = MBartModel.from_pretrained("facebook/mbart-large-cc25")
head_mask = None
cross_attn_head_mask = torch.ones(1000, 1, 1, 1)
model(input_ids, head_mask=head_mask, cross_attn_head_mask=cross_attn_head_mask)
```
This PR fixes this
cc @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21730/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21730",
"html_url": "https://github.com/huggingface/transformers/pull/21730",
"diff_url": "https://github.com/huggingface/transformers/pull/21730.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21730.patch",
"merged_at": 1677054086000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21729
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21729/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21729/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21729/events
|
https://github.com/huggingface/transformers/pull/21729
| 1,593,687,626
|
PR_kwDOCUB6oc5KcMo7
| 21,729
|
Added "Open in Colab" to task guides
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
Some of the task guides did not have the "Open in Colab" option, which can be very useful in this type of doc. This small PR adds the option.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21729/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21729",
"html_url": "https://github.com/huggingface/transformers/pull/21729",
"diff_url": "https://github.com/huggingface/transformers/pull/21729.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21729.patch",
"merged_at": 1677072755000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21728
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21728/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21728/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21728/events
|
https://github.com/huggingface/transformers/issues/21728
| 1,593,621,127
|
I_kwDOCUB6oc5e_LqH
| 21,728
|
Error with shape-assertion regarding head_masks in mbart_modeling file
|
{
"login": "josh-oo",
"id": 22002584,
"node_id": "MDQ6VXNlcjIyMDAyNTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/22002584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josh-oo",
"html_url": "https://github.com/josh-oo",
"followers_url": "https://api.github.com/users/josh-oo/followers",
"following_url": "https://api.github.com/users/josh-oo/following{/other_user}",
"gists_url": "https://api.github.com/users/josh-oo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/josh-oo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/josh-oo/subscriptions",
"organizations_url": "https://api.github.com/users/josh-oo/orgs",
"repos_url": "https://api.github.com/users/josh-oo/repos",
"events_url": "https://api.github.com/users/josh-oo/events{/privacy}",
"received_events_url": "https://api.github.com/users/josh-oo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Great catch! The PR https://github.com/huggingface/transformers/pull/21730 should add the fix you proposed"
] | 1,676
| 1,677
| 1,677
|
NONE
| null |
### System Info
file: https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/models/mbart/modeling_mbart.py
commit-hash: fd5cdaeea6eafac32e9d967327bfa3dc0e0d962d
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Call the forward method of MBartModel with:
- head_mask=None,
- decoder_head_mask=None,
and
- cross_attn_head_mask=torch.ones(mismatching_layer_size, number_of_heads)
### Expected behavior
**Expected Output:** The cross_attn_head_mask should be specified for X layers, but it is for Y.
**Actual Output:** 'NoneType' object has no attribute 'size'
**Potential fix:** The head_mask from line 1058 should probably reference the attn_mask from line 1053.
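A hedged sketch of the validation in question (recast as a free function for illustration; the actual check lives in `modeling_mbart.py`): the buggy version compared `head_mask.size()[0]` in both iterations of the loop, so passing only `cross_attn_head_mask` dereferenced a `None`.
```python
def check_mask_layers(head_mask, cross_attn_head_mask, num_layers):
    # The fix: compare the loop variable `attn_mask`, not `head_mask`.
    for attn_mask, mask_name in zip(
        [head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]
    ):
        if attn_mask is not None and attn_mask.size()[0] != num_layers:
            raise ValueError(
                f"The `{mask_name}` should be specified for {num_layers} layers, "
                f"but it is for {attn_mask.size()[0]}."
            )
```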
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21728/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21727
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21727/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21727/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21727/events
|
https://github.com/huggingface/transformers/pull/21727
| 1,593,607,304
|
PR_kwDOCUB6oc5Kb7qB
| 21,727
|
Fix to KerasMetricCallback when the model returns unstructured output
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I don't even know if we're testing this callback at all, because it's not really a core piece of `transformers`. I'll merge this for now and then think about where we could add some tests in a future PR!"
] | 1,676
| 1,677
| 1,677
|
MEMBER
| null |
The `KerasMetricCallback` was only tested on `transformers` models, which usually return dict-like `ModelOutput` objects. As a result, I missed a bug when the model is a more classical Keras model that just returns a naked array. Thanks to @leadbetterben for pointing out the issue.
Fixes #21674 .
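A hedged illustration of the class of fix (`_as_array_dict` is a hypothetical helper name; the real patch lives in the callback code): normalize whatever the model returns before the metric computation.
```python
import numpy as np

def _as_array_dict(predictions):
    # Hypothetical helper: ModelOutput behaves like a dict; a bare array gets
    # wrapped so downstream metric code can treat both cases uniformly.
    if isinstance(predictions, dict):
        return {k: np.asarray(v) for k, v in predictions.items()}
    return {"predictions": np.asarray(predictions)}
```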
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21727/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21727",
"html_url": "https://github.com/huggingface/transformers/pull/21727",
"diff_url": "https://github.com/huggingface/transformers/pull/21727.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21727.patch",
"merged_at": 1677071715000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21726
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21726/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21726/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21726/events
|
https://github.com/huggingface/transformers/pull/21726
| 1,593,451,864
|
PR_kwDOCUB6oc5KbaMt
| 21,726
|
Fix `ErnieMEmbeddings` device issue
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Fix `ErnieMEmbeddings` CI failure for
```bash
tests/models/ernie_m/test_modeling_ernie_m.py::ErnieMModelTest::test_multi_gpu_data_parallel_forward
```
See comments in the changes.
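The PR text defers to the inline comments, but `test_multi_gpu_data_parallel_forward` failures in embedding modules are typically device-placement bugs; an illustrative (not verbatim) pattern:
```python
import torch

def make_position_ids(inputs_embeds: torch.Tensor) -> torch.Tensor:
    # Illustrative only: tensors created inside forward() must live on the
    # replica's device, or nn.DataParallel forwards will mix devices.
    seq_length = inputs_embeds.size(1)
    return torch.arange(seq_length, dtype=torch.long, device=inputs_embeds.device)
```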
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21726/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21726",
"html_url": "https://github.com/huggingface/transformers/pull/21726",
"diff_url": "https://github.com/huggingface/transformers/pull/21726.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21726.patch",
"merged_at": 1677059855000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21725
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21725/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21725/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21725/events
|
https://github.com/huggingface/transformers/pull/21725
| 1,593,334,534
|
PR_kwDOCUB6oc5KbA9N
| 21,725
|
Make ImageProcessorMixin compatible with subfolder kwarg
|
{
"login": "Abhinay1997",
"id": 24771261,
"node_id": "MDQ6VXNlcjI0NzcxMjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/24771261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abhinay1997",
"html_url": "https://github.com/Abhinay1997",
"followers_url": "https://api.github.com/users/Abhinay1997/followers",
"following_url": "https://api.github.com/users/Abhinay1997/following{/other_user}",
"gists_url": "https://api.github.com/users/Abhinay1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abhinay1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abhinay1997/subscriptions",
"organizations_url": "https://api.github.com/users/Abhinay1997/orgs",
"repos_url": "https://api.github.com/users/Abhinay1997/repos",
"events_url": "https://api.github.com/users/Abhinay1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abhinay1997/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger I have added the test. Let me know your thoughts please."
] | 1,676
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
Adds `subfolder` support to `ImageProcessorMixin` so that image processors can be loaded from a specific subfolder of a Hugging Face Hub repo.
See the issue: https://github.com/huggingface/transformers/issues/21715
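A hedged usage sketch of what the kwarg enables (the repo and subfolder names are illustrative placeholders):
```python
from transformers import AutoImageProcessor

# "some-org/some-repo" and "image_processor" are placeholders, not real repos.
processor = AutoImageProcessor.from_pretrained("some-org/some-repo", subfolder="image_processor")
```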
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21725/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21725",
"html_url": "https://github.com/huggingface/transformers/pull/21725",
"diff_url": "https://github.com/huggingface/transformers/pull/21725.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21725.patch",
"merged_at": 1677140898000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21724
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21724/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21724/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21724/events
|
https://github.com/huggingface/transformers/issues/21724
| 1,593,300,616
|
I_kwDOCUB6oc5e99aI
| 21,724
|
Invalid header value when loading "bert-base-uncased"
|
{
"login": "Lyken17",
"id": 7783214,
"node_id": "MDQ6VXNlcjc3ODMyMTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7783214?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lyken17",
"html_url": "https://github.com/Lyken17",
"followers_url": "https://api.github.com/users/Lyken17/followers",
"following_url": "https://api.github.com/users/Lyken17/following{/other_user}",
"gists_url": "https://api.github.com/users/Lyken17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lyken17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lyken17/subscriptions",
"organizations_url": "https://api.github.com/users/Lyken17/orgs",
"repos_url": "https://api.github.com/users/Lyken17/repos",
"events_url": "https://api.github.com/users/Lyken17/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lyken17/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Solved via deleting all cached files under ```~/.cache/huggingface```"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.31
- Python version: 3.10.9
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
Python 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.10.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: from transformers import AutoTokenizer
In [2]: tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[2], line 1
----> 1 tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
File ~/miniconda3/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:598, in AutoTokenizer.from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
595 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
597 # Next, let's try to use the tokenizer_config file to get the tokenizer class.
--> 598 tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
599 if "_commit_hash" in tokenizer_config:
600 kwargs["_commit_hash"] = tokenizer_config["_commit_hash"]
File ~/miniconda3/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:442, in get_tokenizer_config(pretrained_model_name_or_path, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, **kwargs)
380 """
381 Loads the tokenizer configuration from a pretrained model tokenizer configuration.
382
(...)
439 tokenizer_config = get_tokenizer_config("tokenizer-test")
440 ```"""
441 commit_hash = kwargs.get("_commit_hash", None)
--> 442 resolved_config_file = cached_file(
443 pretrained_model_name_or_path,
444 TOKENIZER_CONFIG_FILE,
445 cache_dir=cache_dir,
446 force_download=force_download,
447 resume_download=resume_download,
448 proxies=proxies,
449 use_auth_token=use_auth_token,
450 revision=revision,
451 local_files_only=local_files_only,
452 subfolder=subfolder,
453 _raise_exceptions_for_missing_entries=False,
454 _raise_exceptions_for_connection_errors=False,
455 _commit_hash=commit_hash,
456 )
457 if resolved_config_file is None:
458 logger.info("Could not locate the tokenizer configuration file, will try to use the model config instead.")
File ~/miniconda3/lib/python3.10/site-packages/transformers/utils/hub.py:409, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash)
406 user_agent = http_user_agent(user_agent)
407 try:
408 # Load from URL or cache if already cached
--> 409 resolved_file = hf_hub_download(
410 path_or_repo_id,
411 filename,
412 subfolder=None if len(subfolder) == 0 else subfolder,
413 revision=revision,
414 cache_dir=cache_dir,
415 user_agent=user_agent,
416 force_download=force_download,
417 proxies=proxies,
418 resume_download=resume_download,
419 use_auth_token=use_auth_token,
420 local_files_only=local_files_only,
421 )
423 except RepositoryNotFoundError:
424 raise EnvironmentError(
425 f"{path_or_repo_id} is not a local folder and is not a valid model identifier "
426 "listed on 'https://huggingface.co/models'\nIf this is a private repository, make sure to "
427 "pass a token having permission to this repo with `use_auth_token` or log in with "
428 "`huggingface-cli login` and pass `use_auth_token=True`."
429 )
File ~/miniconda3/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
119 if check_use_auth_token:
120 kwargs = smoothly_deprecate_use_auth_token(
121 fn_name=fn.__name__, has_token=has_token, kwargs=kwargs
122 )
--> 124 return fn(*args, **kwargs)
File ~/miniconda3/lib/python3.10/site-packages/huggingface_hub/file_download.py:1106, in hf_hub_download(repo_id, filename, subfolder, repo_type, revision, library_name, library_version, cache_dir, user_agent, force_download, force_filename, proxies, etag_timeout, resume_download, token, local_files_only, legacy_cache_layout)
1104 try:
1105 try:
-> 1106 metadata = get_hf_file_metadata(
1107 url=url,
1108 token=token,
1109 proxies=proxies,
1110 timeout=etag_timeout,
1111 )
1112 except EntryNotFoundError as http_error:
1113 # Cache the non-existence of the file and raise
1114 commit_hash = http_error.response.headers.get(
1115 HUGGINGFACE_HEADER_X_REPO_COMMIT
1116 )
File ~/miniconda3/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
119 if check_use_auth_token:
120 kwargs = smoothly_deprecate_use_auth_token(
121 fn_name=fn.__name__, has_token=has_token, kwargs=kwargs
122 )
--> 124 return fn(*args, **kwargs)
File ~/miniconda3/lib/python3.10/site-packages/huggingface_hub/file_download.py:1432, in get_hf_file_metadata(url, token, proxies, timeout)
1429 headers = build_hf_headers(token=token)
1431 # Retrieve metadata
-> 1432 r = _request_wrapper(
1433 method="HEAD",
1434 url=url,
1435 headers=headers,
1436 allow_redirects=False,
1437 follow_relative_redirects=True,
1438 proxies=proxies,
1439 timeout=timeout,
1440 )
1441 hf_raise_for_status(r)
1443 # Return
File ~/miniconda3/lib/python3.10/site-packages/huggingface_hub/file_download.py:405, in _request_wrapper(method, url, max_retries, base_wait_time, max_wait_time, timeout, follow_relative_redirects, **params)
403 # 2. Force relative redirection
404 if follow_relative_redirects:
--> 405 response = _request_wrapper(
406 method=method,
407 url=url,
408 max_retries=max_retries,
409 base_wait_time=base_wait_time,
410 max_wait_time=max_wait_time,
411 timeout=timeout,
412 follow_relative_redirects=False,
413 **params,
414 )
416 # If redirection, we redirect only relative paths.
417 # This is useful in case of a renamed repository.
418 if 300 <= response.status_code <= 399:
File ~/miniconda3/lib/python3.10/site-packages/huggingface_hub/file_download.py:440, in _request_wrapper(method, url, max_retries, base_wait_time, max_wait_time, timeout, follow_relative_redirects, **params)
437 return response
439 # 3. Exponential backoff
--> 440 return http_backoff(
441 method=method,
442 url=url,
443 max_retries=max_retries,
444 base_wait_time=base_wait_time,
445 max_wait_time=max_wait_time,
446 retry_on_exceptions=(ConnectTimeout, ProxyError),
447 retry_on_status_codes=(),
448 timeout=timeout,
449 **params,
450 )
File ~/miniconda3/lib/python3.10/site-packages/huggingface_hub/utils/_http.py:129, in http_backoff(method, url, max_retries, base_wait_time, max_wait_time, retry_on_exceptions, retry_on_status_codes, **kwargs)
126 kwargs["data"].seek(io_obj_initial_pos)
128 # Perform request and return if status_code is not in the retry list.
--> 129 response = requests.request(method=method, url=url, **kwargs)
130 if response.status_code not in retry_on_status_codes:
131 return response
File ~/miniconda3/lib/python3.10/site-packages/requests/api.py:59, in request(method, url, **kwargs)
55 # By using the 'with' statement we are sure the session is closed, thus we
56 # avoid leaving sockets open which can trigger a ResourceWarning in some
57 # cases, and look like a memory leak in others.
58 with sessions.Session() as session:
---> 59 return session.request(method=method, url=url, **kwargs)
File ~/miniconda3/lib/python3.10/site-packages/requests/sessions.py:587, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
582 send_kwargs = {
583 "timeout": timeout,
584 "allow_redirects": allow_redirects,
585 }
586 send_kwargs.update(settings)
--> 587 resp = self.send(prep, **send_kwargs)
589 return resp
File ~/miniconda3/lib/python3.10/site-packages/requests/sessions.py:701, in Session.send(self, request, **kwargs)
698 start = preferred_clock()
700 # Send the request
--> 701 r = adapter.send(request, **kwargs)
703 # Total elapsed time of the request (approximately)
704 elapsed = preferred_clock() - start
File ~/miniconda3/lib/python3.10/site-packages/requests/adapters.py:489, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
487 try:
488 if not chunked:
--> 489 resp = conn.urlopen(
490 method=request.method,
491 url=url,
492 body=request.body,
493 headers=request.headers,
494 redirect=False,
495 assert_same_host=False,
496 preload_content=False,
497 decode_content=False,
498 retries=self.max_retries,
499 timeout=timeout,
500 )
502 # Send the request.
503 else:
504 if hasattr(conn, "proxy_pool"):
File ~/miniconda3/lib/python3.10/site-packages/urllib3/connectionpool.py:703, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
700 self._prepare_proxy(conn)
702 # Make the request on the httplib connection object.
--> 703 httplib_response = self._make_request(
704 conn,
705 method,
706 url,
707 timeout=timeout_obj,
708 body=body,
709 headers=headers,
710 chunked=chunked,
711 )
713 # If we're going to release the connection in ``finally:``, then
714 # the response doesn't need to know about the connection. Otherwise
715 # it will also try to release it and we'll have a double-release
716 # mess.
717 response_conn = conn if not release_conn else None
File ~/miniconda3/lib/python3.10/site-packages/urllib3/connectionpool.py:398, in HTTPConnectionPool._make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
396 conn.request_chunked(method, url, **httplib_request_kw)
397 else:
--> 398 conn.request(method, url, **httplib_request_kw)
400 # We are swallowing BrokenPipeError (errno.EPIPE) since the server is
401 # legitimately able to close the connection after sending a valid response.
402 # With this behaviour, the received response is still readable.
403 except BrokenPipeError:
404 # Python 3
File ~/miniconda3/lib/python3.10/site-packages/urllib3/connection.py:239, in HTTPConnection.request(self, method, url, body, headers)
237 if "user-agent" not in (six.ensure_str(k.lower()) for k in headers):
238 headers["User-Agent"] = _get_default_user_agent()
--> 239 super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File ~/miniconda3/lib/python3.10/http/client.py:1282, in HTTPConnection.request(self, method, url, body, headers, encode_chunked)
1279 def request(self, method, url, body=None, headers={}, *,
1280 encode_chunked=False):
1281 """Send a complete request to the server."""
-> 1282 self._send_request(method, url, body, headers, encode_chunked)
File ~/miniconda3/lib/python3.10/http/client.py:1323, in HTTPConnection._send_request(self, method, url, body, headers, encode_chunked)
1320 encode_chunked = False
1322 for hdr, value in headers.items():
-> 1323 self.putheader(hdr, value)
1324 if isinstance(body, str):
1325 # RFC 2616 Section 3.7.1 says that text default has a
1326 # default charset of iso-8859-1.
1327 body = _encode(body, 'body')
File ~/miniconda3/lib/python3.10/site-packages/urllib3/connection.py:224, in HTTPConnection.putheader(self, header, *values)
222 """ """
223 if not any(isinstance(v, str) and v == SKIP_HEADER for v in values):
--> 224 _HTTPConnection.putheader(self, header, *values)
225 elif six.ensure_str(header.lower()) not in SKIPPABLE_HEADERS:
226 raise ValueError(
227 "urllib3.util.SKIP_HEADER only supports '%s'"
228 % ("', '".join(map(str.title, sorted(SKIPPABLE_HEADERS))),)
229 )
File ~/miniconda3/lib/python3.10/http/client.py:1260, in HTTPConnection.putheader(self, header, *values)
1257 values[i] = str(one_value).encode('ascii')
1259 if _is_illegal_header_value(values[i]):
-> 1260 raise ValueError('Invalid header value %r' % (values[i],))
1262 value = b'\r\n\t'.join(values)
1263 header = header + b': ' + value
ValueError: Invalid header value b'Bearer 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\n'
```
### Expected behavior
The script above should load the tokenizer normally, but instead it raises the `ValueError: Invalid header value` shown in the traceback.
I am using Python from the latest Miniconda and only installed `torch` and `transformers`. I am not sure what causes the issue.
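For reference, the last traceback frame points at the actual failure mode: the cached token value ends with a newline, which `http.client` rejects as a control character in a header value. A minimal illustration (the token string is a placeholder):
```python
token = "hf_XXXX\n"  # placeholder token with a stray trailing newline
header_value = f"Bearer {token}"
print(repr(header_value))  # 'Bearer hf_XXXX\n' -> http.client: "Invalid header value"
```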
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21724/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21723
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21723/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21723/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21723/events
|
https://github.com/huggingface/transformers/pull/21723
| 1,593,282,350
|
PR_kwDOCUB6oc5Ka15z
| 21,723
|
Change doc example for `BigBirdForQuestionAnswering`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Checkpoint `"abhinavkulkarni/bigbird-roberta-base-finetuned-squad"` is no longer public or was removed by the user. Use the base model `google/bigbird-roberta-base` and do not test against expected outputs.
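A hedged sketch of the shape of the updated example (checkpoint name taken from the PR text; outputs intentionally not asserted):
```python
from transformers import AutoTokenizer, BigBirdForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-roberta-base")
```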
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21723/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21723",
"html_url": "https://github.com/huggingface/transformers/pull/21723",
"diff_url": "https://github.com/huggingface/transformers/pull/21723.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21723.patch",
"merged_at": 1677059712000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21722
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21722/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21722/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21722/events
|
https://github.com/huggingface/transformers/pull/21722
| 1,593,149,869
|
PR_kwDOCUB6oc5KaZ8f
| 21,722
|
Remove `gptsan_japanese` from doctest list to avoid GPU OOM
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Remove `gptsan_japanese` from the doctest list to avoid GPU OOM (which otherwise affects the doctests of some other models).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21722/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21722",
"html_url": "https://github.com/huggingface/transformers/pull/21722",
"diff_url": "https://github.com/huggingface/transformers/pull/21722.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21722.patch",
"merged_at": 1677059460000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21721
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21721/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21721/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21721/events
|
https://github.com/huggingface/transformers/issues/21721
| 1,593,106,464
|
I_kwDOCUB6oc5e9OAg
| 21,721
|
700 hp - 1250 hp
|
{
"login": "Marco071086",
"id": 88912522,
"node_id": "MDQ6VXNlcjg4OTEyNTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/88912522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Marco071086",
"html_url": "https://github.com/Marco071086",
"followers_url": "https://api.github.com/users/Marco071086/followers",
"following_url": "https://api.github.com/users/Marco071086/following{/other_user}",
"gists_url": "https://api.github.com/users/Marco071086/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Marco071086/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Marco071086/subscriptions",
"organizations_url": "https://api.github.com/users/Marco071086/orgs",
"repos_url": "https://api.github.com/users/Marco071086/repos",
"events_url": "https://api.github.com/users/Marco071086/events{/privacy}",
"received_events_url": "https://api.github.com/users/Marco071086/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"good thing that this is on GitHub and not on the hub"
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
Audi A3 FWD 2.0T
Engine Turbocharger ES#4147635
R410 Turbo Upgrade Kit & Tuning Package
The R410 Turbo Kit was designed for the track day enthusiast looking for increased horsepower and torque throughout the powerband, without sacrificing response and reliability.
End user to work directly with 034Motorsport on tune - While software is included, the ECU will still need to be sent to 034 for the initial flash load*
For vehicles with FSI engines only
Mfg #034-145-1015ECS #ES#4147635
Price: 4347.00
Audi
8P (2005-2013)
2.0T
FWD
For vehicles with FSI engines only
Product Details
034Motorsport is proud to offer the R410 Hybrid Turbocharger Upgrade Kit for the 8J/8P Audi TT/A3 & MkV Volkswagen GTI/GLI 2.0T FSI!
R410 was designed for the track day enthusiast who desires an improved usable powerband without sacrificing reliability. Providing significant increases in horsepower and torque throughout the powerband, the R410 Turbo Kit shines on track where the factory turbocharger can’t keep up. Consisting of an OEM+ hybrid turbocharger upgrade and 034Motorsport’s proprietary performance software, R410 is the elegant, reliable solution for breathtaking performance on the street or track, at an excellent price point.
At the center of the 034Motorsport R410 Turbo Kit is the LOBA LO410-EA113DV drop-in hybrid turbocharger upgrade. Made in Germany and based on the factory Borg Warner unit, this turbocharger features a state-of-the-art billet compressor wheel to allow for higher flow. The backplate, compressor housing, turbine housing, and exhaust manifold have all been CNC-machined for optimal flow and increased performance. In addition, the LO410-EA113DV turbocharger features an upgraded thrust bearing and is precision-balanced to ensure reliability.
034Motorsport spent a significant amount of time developing and verifying our proprietary software to ensure that R410 delivers consistent, reliable power under grueling track conditions. Through optimization of the factory ECU's boost, fueling, and timing maps, the R410 Tuning Package brings out the potential of the LO410-EA113DV Turbocharger. Peak boost ranges from 20-23 PSI (octane dependent) and tapers to 18 PSI by the new 7,100 RPM redline to keep the turbo running at its optimum efficiency. This conservative, track-oriented boost mapping provides rock-solid performance lap after lap, and is combined with an advanced boost control strategy that allows for increased precision beyond factory limits. Going beyond power improvements, 034Motorsport’s calibrator also made improvements to the throttle mapping, increased idle stability, and enabled left-foot braking. The end result is a tune that drives as smoothly as a factory calibration, with power delivery that is consistent and manageable on the street and track alike.
The R410 Hybrid Turbocharger Upgrade Kit installs as a drop-in replacement for the factory parts, without requiring extensive modifications to other components. Every R410 Tuning Package includes a fully-loaded PL34 Handheld Flash-Loader that allows the end user to reflash between 91, 93 and 100 octane files.
Tuning Features:
Developed In-House on the Street, Track, and 034Motorsport's Chassis Dyno
Optimized Boost, Timing, and Fueling Maps for Increased Horsepower & Torque
Includes PL34 Handheld Flash-Loader with 91 Octane, 93 Octane, 100, and 104 Octane Tunes
Increased Rev Limiter to 7,100 RPM
Speed Limiter (Governor) Removed
Improved Throttle Response & Power Delivery
Refined Throttle Mapping for Part Throttle Drive-ability
Increased Idle Stability (Especially Helpful with Lightweight Flywheels!)
Hardware Features:
LOBA LO410-EA113DV Turbocharger
LOBA CNC-Machined Billet Compressor Wheel
5-Axis CNC Re-Profiled Compressor Housing & Backplate
5-Axis CNC-Machined Turbine Housing
Upgraded Thrust Bearing
Upgraded Wastegate Actuator
High-Precision Balancing
Made in Germany
Audi S3 FSI Fuel Injectors
155 bar HPFP PRV
3 bar MAP Sensor
PL34 Hand-Held Flash Loader
Installation Hardware Kit Included!
Compatible Vehicles:
2006 - 2008 Audi TT (8J)
2.0T FSI
2006 - 2008 Audi A3 (8P)
2.0T FSI
2006 - 2008 Volkswagen Eos / GLI / GTI (MkV)
2.0T FSI
Required Supporting Modifications:
High Pressure Fuel Pump (HPFP) Upgrade
Performance Downpipe Upgrade
Performance Intercooler Upgrade
Performance Air Intake
Recommended Supporting Modifications:
Low Pressure Fuel Pump (LPFP) Upgrade
Tune Installation:
Initial Installation: Flashed directly to your vehicle's ECU through the OBD-2 port using the included PL34 Handheld Flash-Loader.
Program Switching: Once the initial flash is performed, end users can flash between programs using the included PL34 Handheld Flash-Loader.
Peak Horsepower & Torque:
91 Octane - 334 Horsepower / 319 Foot-Pounds of Torque
93 Octane - 354 Horsepower / 347 Foot-Pounds of Torque
100 Octane - 376 Horsepower / 353 Foot-Pounds of Torque
Peak Horsepower & Torque Gains Under Curve:
91 Octane - 137 HP @ 6,400 RPM / 114 TQ @ 5,550 RPM
93 Octane - 155 HP @ 6,400 RPM / 136 TQ @ 4,600 RPM
100 Octane - 179 HP @ 6,400 RPM / 148 TQ @ 5,850 RPM
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21721/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21720
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21720/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21720/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21720/events
|
https://github.com/huggingface/transformers/issues/21720
| 1,593,048,026
|
I_kwDOCUB6oc5e8_va
| 21,720
|
Multi-GPU inference using accelerate giving inaccurate/gibberish results on RTX 4090s
|
{
"login": "milsun",
"id": 35405363,
"node_id": "MDQ6VXNlcjM1NDA1MzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35405363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/milsun",
"html_url": "https://github.com/milsun",
"followers_url": "https://api.github.com/users/milsun/followers",
"following_url": "https://api.github.com/users/milsun/following{/other_user}",
"gists_url": "https://api.github.com/users/milsun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/milsun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/milsun/subscriptions",
"organizations_url": "https://api.github.com/users/milsun/orgs",
"repos_url": "https://api.github.com/users/milsun/repos",
"events_url": "https://api.github.com/users/milsun/events{/privacy}",
"received_events_url": "https://api.github.com/users/milsun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"If you use `bfloat16` as a torch dtype, do you get the same results? I'm wondering if somehow the ops in float16 are badly implemented on those GPUs.",
"Just tried, still outputs gibberish.\r\n\r\nprompt = \"What is the color of carrots?\\nAnswer:\"\r\noutput (bfloat16) = 'What is the color of carrots?\\nAnswer: \" \"'\r\n\r\nImportant thing to note here is, if model is loaded on just one gpu, it works fine.\r\nThat should help us narrow down the possible causes.",
"I have been reading up on this, seems to be a Nvidia driver issue, which is still unfixed. Basically, issue is with P2P memory access with the RTX 4090s. Seems like we are stuck with Nvidia to fix the issue.",
"Thanks for investigating. Do you have a link you could share so that others reading the issue later on can have all the info?",
"https://forums.developer.nvidia.com/t/standard-nvidia-cuda-tests-fail-with-dual-rtx-4090-linux-box",
"I tried running the same script in Windows, works fine. So, the conclusion is, we need to wait for Nvidia to update the drivers for Linux. Closing the issue. Thanks for your time @sgugger ."
] | 1,676
| 1,677
| 1,677
|
NONE
| null |
### System Info
I am trying to use the pretrained opt-6.7b model for inference with `device_map` set to `"auto"` or `"balanced"`, i.e. scenarios where the model weights are spread across both GPUs; the results produced are inaccurate and gibberish. If the model is loaded on just one GPU, it works fine.
I also tried opt-13b and other similarly sized models spread across both GPUs, but all of them produce gibberish results.
I also tried the same setup with different GPUs (2x RTX 3090, 2x A5000); it works fine with those.
In short, the results are gibberish only when a model is spread across multiple RTX 4090s.
Specs:
python=3.9
transformers==4.26.1
accelerate==0.16.0
cuda==11.6
Code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)

prompt = "What is the color of carrots?\nAnswer:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
generated_ids = model.generate(input_ids)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```
Output:
```
['What is the color of carrots?\nAnswer: toinhoza, and the other half of']
```
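For reference, a minimal sketch of the single-GPU setup that works on this machine; the `device_map={"": 0}` mapping pins all weights to one device, so no cross-GPU transfer is involved (the mapping value is the only assumption here, everything else matches the repro above):
```python
# Single-GPU variant: device_map={"": 0} places the whole model on GPU 0
# instead of sharding it across both 4090s.
from transformers import AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b", torch_dtype=torch.float16, device_map={"": 0}
)
```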
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps:
1. Use 2 x RTX 4090 node
2. Install transformers, accelerate
3. Use the above code to load and run inference with a pretrained OPT model, such as opt-6.7b or opt-13b
4. Set device_map as "auto" or "balanced"
5. Run inference
### Expected behavior
With prompt = "What is the color of carrots?\nAnswer:"
Result should be "What is the color of carrots?\nAnswer: Orange"
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21720/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21719
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21719/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21719/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21719/events
|
https://github.com/huggingface/transformers/issues/21719
| 1,592,912,092
|
I_kwDOCUB6oc5e8ejc
| 21,719
|
fsmt translation with use_cache=True bug
|
{
"login": "lihaoxin2020",
"id": 77715908,
"node_id": "MDQ6VXNlcjc3NzE1OTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/77715908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lihaoxin2020",
"html_url": "https://github.com/lihaoxin2020",
"followers_url": "https://api.github.com/users/lihaoxin2020/followers",
"following_url": "https://api.github.com/users/lihaoxin2020/following{/other_user}",
"gists_url": "https://api.github.com/users/lihaoxin2020/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lihaoxin2020/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lihaoxin2020/subscriptions",
"organizations_url": "https://api.github.com/users/lihaoxin2020/orgs",
"repos_url": "https://api.github.com/users/lihaoxin2020/repos",
"events_url": "https://api.github.com/users/lihaoxin2020/events{/privacy}",
"received_events_url": "https://api.github.com/users/lihaoxin2020/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @lihaoxin2020 \r\nThanks for the issue! \r\nI tried to run the script with the command:\r\n```bash\r\npython -m torch.distributed.launch --nproc_per_node 2 run_translation.py run_translation_config.json \r\n```\r\nand getting:\r\n```\r\nrun_translation.py: error: the following arguments are required: --model_name_or_path, --output_dir\r\n```\r\nIf I put the correct values for those flags I get `ValueError: Need either a dataset name or a training/validation file.`\r\n\r\nI will try to see if I can reproduce locally without having to run this script",
"Based on your traceback, here is a script I made to try to reproduce the issue, \r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForSeq2SeqLM\r\nfrom transformers.trainer_pt_utils import LabelSmoother\r\n\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"allenai/wmt19-de-en-6-6-base\")\r\ndummy_input = torch.LongTensor([[0, 1, 1, 2, 3]])\r\nlabels = torch.LongTensor([[0, 1, 1, 2, 3]])\r\n\r\noutputs = model(input_ids=dummy_input, use_cache=True)\r\n\r\nlabel_smoother = LabelSmoother()\r\nloss = label_smoother(outputs, labels, shift_labels=True)\r\n```\r\nDoes this script works for you? ",
"> Based on your traceback, here is a script I made to try to reproduce the issue,\r\n> \r\n> ```python\r\n> import torch\r\n> from transformers import AutoModelForSeq2SeqLM\r\n> from transformers.trainer_pt_utils import LabelSmoother\r\n> \r\n> model = AutoModelForSeq2SeqLM.from_pretrained(\"allenai/wmt19-de-en-6-6-base\")\r\n> dummy_input = torch.LongTensor([[0, 1, 1, 2, 3]])\r\n> labels = torch.LongTensor([[0, 1, 1, 2, 3]])\r\n> \r\n> outputs = model(input_ids=dummy_input, use_cache=True)\r\n> \r\n> label_smoother = LabelSmoother()\r\n> loss = label_smoother(outputs, labels, shift_labels=True)\r\n> ```\r\n> \r\n> Does this script works for you?\r\n\r\nHi @younesbelkada ! For this script, I think you need to try `outputs = model(input_ids=dummy_input, decoder_input_ids=labels, use_cache=True)` to make it equivalent to the context I referred to. \r\n\r\nI got the same error with the new `outputs` line:\r\n```\r\nTraceback (most recent call last):\r\n File \"./playground.py\", line 16, in <module>\r\n loss = label_smoother(outputs, labels, shift_labels=True)\r\n File \"/mmfs1/home/lihaoxin/workspace/lihaoxin/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/trainer_pt_utils.py\", line 498, in __call__\r\n nll_loss = log_probs.gather(dim=-1, index=labels)\r\nRuntimeError: Size does not match at dimension 1 expected index [1, 4, 1] to be smaller than self [1, 0, 43536] apart from dimension 2\r\n```\r\n",
"Hi @lihaoxin2020 \r\nI managed to reproduce the issue, thanks a lot! \r\nAs a quick fix I propose you to not use `use_cache` for now while we investigate what is happening ! Thanks!",
"I also had the same issue (albeit with a custom training script), here's what I think is happening:\r\n\r\nIn the run_translation config you've set `label_smoothing_factor` to greater than 0. As a result, the `labels` field is removed from the model forward call on the [Trainer file](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py) when computing loss:\r\n```\r\nif self.label_smoother is not None and \"labels\" in inputs:\r\n labels = inputs.pop(\"labels\")\r\n```\r\n\r\nThis is done because if labels is provided to the FSMT model when called, the loss will be calculated twice. (Once in the FSMT model call, and once in the label smoothing class).\r\n\r\nIn [modeling_fsmt](https://github.com/huggingface/transformers/blob/main/src/transformers/models/fsmt/modeling_fsmt.py), we can see that use_cache is only implicitly disabled when labels is provided, which is not the case with label smoothing enabled:\r\n```\r\nif labels is not None:\r\n use_cache = False\r\n```\r\n\r\nIn FSMTDecoder, we can see that when use_cache is enabled, even during training, it will slice off all of the input IDs except for the last whenever called:\r\n```\r\nif use_cache:\r\n input_ids = input_ids[:, -1:]\r\n positions = positions[:, -1:] # happens after we embed them\r\n```\r\n\r\nI don't know how BART handles it, but in [Marian](https://github.com/huggingface/transformers/blob/main/src/transformers/models/marian/modeling_marian.py) (which copies a lot of BART code), they only slice off the input_ids when preparing inputs for generation (past_key_values is probably only passed when use_cache is on):\r\n```\r\ndef prepare_inputs_for_generation(\r\n self, input_ids, past_key_values=None, attention_mask=None, use_cache=None, **kwargs\r\n):\r\n # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly\r\n if attention_mask is None:\r\n attention_mask = input_ids.new_ones(input_ids.shape)\r\n\r\n if past_key_values:\r\n input_ids = input_ids[:, -1:]\r\n```\r\n\r\n[younesbelkada](https://github.com/younesbelkada)'s solution definitely works for now.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,680
| 1,680
|
NONE
| null |
### System Info
I'm running [run_translation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation) to train from scratch with fsmt artifacts.
I'm using
- `transformers` version: 4.27.0.dev0
- Platform: Linux-3.10.0-862.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu117 (True)
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Detailed configuration `run_translation_config.json` as follows:
```json
{
"config_name": "allenai/wmt19-de-en-6-6-base",
"tokenizer_name": "allenai/wmt16-en-de-12-1",
"use_fast_tokenizer": false,
"dataset_name": "wmt16",
"dataset_config_name": "de-en",
"source_lang": "en",
"target_lang": "de",
"max_source_length": 512,
"max_target_length": 512,
"generation_max_length": 512,
"preprocessing_num_workers": 32,
"output_dir": "./tmp/transformer_base_wmt16_6_6_en-de_fsmt",
"run_name": "transformer_base_wmt16_6_6_en-de_fsmt",
"deepspeed": "./ds_stage2_config.json", (NOT important)
"sortish_sampler": true,
"overwrite_output_dir": true,
"do_train": true,
"do_eval": true,
"do_predict": false,
"evaluation_strategy": "steps",
"eval_steps": 2000,
"predict_with_generate": true,
"load_best_model_at_end": true,
"logging_strategy": "steps",
"logging_steps": 1000,
"logging_first_step" :true,
"eval_accumulation_steps": 16,
"dataloader_num_workers": 32,
"per_device_train_batch_size": 64,
"per_device_eval_batch_size": 64,
"gradient_accumulation_steps": 4,
"fp16": true,
"adam_beta1": 0.9,
"adam_beta2":0.998,
"adam_epsilon": 1e-08,
"learning_rate": 5e-4,
"weight_decay": 0.01,
"label_smoothing_factor": 0.1,
"lr_scheduler_type": "linear",
"warmup_ratio": 0.04,
"num_train_epochs": 25,
"save_strategy": "steps",
"save_steps": 2000,
"save_total_limit": 10,
"seed": 42
}
```
2. run command
```shell
python -m torch.distributed.launch --nproc_per_node 8 \
run_translation.py \
run_translation_config.json
```
### Expected behavior
```
Traceback (most recent call last):
File "run_translation.py", line 675, in <module>
main()
File "run_translation.py", line 592, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/ace14487oj/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/trainer.py", line 1570, in train
return inner_training_loop(
File "/home/ace14487oj/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/trainer.py", line 1835, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/ace14487oj/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/trainer.py", line 2583, in training_step
loss = self.compute_loss(model, inputs)
File "/home/ace14487oj/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/trainer.py", line 2625, in compute_loss
loss = self.label_smoother(outputs, labels)
File "/home/ace14487oj/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 498, in __call__
nll_loss = log_probs.gather(dim=-1, index=labels)
RuntimeError: Size does not match at dimension 1 expected index [64, 104, 1] to be smaller than self [64, 1, 43536] apart from dimension 2
```
After some investigation, I found that the error is related to the use_cache implementation and only happens if I leave use_cache = true in the FSMT model config. When I set use_cache = false, the error disappears. I also trained BART with run_translation.py on exactly the same parameters (use_cache = true), and a similar issue does not happen with BART.
Has anybody run into this issue as well, or can someone confirm this is a bug?
Thanks!
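Until the root cause is fixed, a minimal sketch of the workaround suggested in the comments, assuming the flag can simply be disabled in the model config before training (model name taken from the config above):
```python
# Disable use_cache so the FSMT decoder does not slice inputs during training
# with label smoothing enabled.
from transformers import AutoConfig, AutoModelForSeq2SeqLM

config = AutoConfig.from_pretrained("allenai/wmt19-de-en-6-6-base")
config.use_cache = False
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/wmt19-de-en-6-6-base", config=config)
```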
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21719/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21718
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21718/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21718/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21718/events
|
https://github.com/huggingface/transformers/pull/21718
| 1,592,746,435
|
PR_kwDOCUB6oc5KZCxv
| 21,718
|
Fixed a bug in remove_handler function
|
{
"login": "bahgat-ahmed",
"id": 20663285,
"node_id": "MDQ6VXNlcjIwNjYzMjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/20663285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bahgat-ahmed",
"html_url": "https://github.com/bahgat-ahmed",
"followers_url": "https://api.github.com/users/bahgat-ahmed/followers",
"following_url": "https://api.github.com/users/bahgat-ahmed/following{/other_user}",
"gists_url": "https://api.github.com/users/bahgat-ahmed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bahgat-ahmed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bahgat-ahmed/subscriptions",
"organizations_url": "https://api.github.com/users/bahgat-ahmed/orgs",
"repos_url": "https://api.github.com/users/bahgat-ahmed/repos",
"events_url": "https://api.github.com/users/bahgat-ahmed/events{/privacy}",
"received_events_url": "https://api.github.com/users/bahgat-ahmed/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21718). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,680
| 1,680
|
NONE
| null |
Fixed the bug mentioned in this issue: #21506
Also replaced `assert` statements with `ValueError`s, but only in the same function that contained the bug.
@LysandreJik
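For illustration, a minimal self-contained sketch of the assert-to-ValueError pattern applied here; the function and logger names are assumptions modeled on `transformers.utils.logging`, and the actual diff may differ:
```python
import logging

_logger = logging.getLogger("transformers")

def remove_handler(handler: logging.Handler) -> None:
    # Before the fix: assert handler is not None and handler in _logger.handlers
    # After the fix: raise an explicit, catchable error instead of an AssertionError.
    if handler is None or handler not in _logger.handlers:
        raise ValueError("The handler to remove is not attached to this logger.")
    _logger.removeHandler(handler)
```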
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21718/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21718",
"html_url": "https://github.com/huggingface/transformers/pull/21718",
"diff_url": "https://github.com/huggingface/transformers/pull/21718.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21718.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21717
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21717/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21717/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21717/events
|
https://github.com/huggingface/transformers/issues/21717
| 1,592,735,074
|
I_kwDOCUB6oc5e7zVi
| 21,717
|
Does generate() support "Export to TorchScript"
|
{
"login": "gongel",
"id": 24390500,
"node_id": "MDQ6VXNlcjI0MzkwNTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/24390500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gongel",
"html_url": "https://github.com/gongel",
"followers_url": "https://api.github.com/users/gongel/followers",
"following_url": "https://api.github.com/users/gongel/following{/other_user}",
"gists_url": "https://api.github.com/users/gongel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gongel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gongel/subscriptions",
"organizations_url": "https://api.github.com/users/gongel/orgs",
"repos_url": "https://api.github.com/users/gongel/repos",
"events_url": "https://api.github.com/users/gongel/events{/privacy}",
"received_events_url": "https://api.github.com/users/gongel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante\r\n",
"Hey @gongel 👋 The model foward pass probably can be serialized, but the full `model.generate` cannot. We are working on the serialization of `model.generate` as we speak, in the context of PyTorch dynamo.\r\n\r\nCan I be of further assistance? :)",
"Thank you, I'm looking forward to it",
"Is there an issue tracking this? What's the status of `TorchScript`ing `model.generate`? Or can `generate` be a standalone function and call `TorchScript` module?",
"@ZisIsNotZis AFAIK no tracking issue. We are exploring generation speedups (which will likely include static shapes, i.e. should be TorchScript compilable) at the moment, to be released in ~3 months.",
"Is there any update here, I also met the same problem when i export model with model generate to TorchScript, is this support now? ",
"I think that #27931 might help for this and will kickstart the effort to support compile so a bit related (functional logits processors etc)"
] | 1,676
| 1,706
| 1,676
|
NONE
| null |
### Feature request
Does T5 `model.generate` support "Export to TorchScript"?
https://huggingface.co/docs/transformers/main/en/torchscript
I want to make deployment faster and export `model.generate`, as discussed here:
https://discuss.huggingface.co/t/t5-base-not-torchscriptable/11173
Thanks, all!
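For context, a minimal sketch of what can be exported today, assuming only the forward pass is traced (per the comments, the decoding loop in `generate` itself is not yet scriptable and would need to be re-implemented in Python around the traced module):
```python
# Trace the T5 forward pass; torchscript=True makes the model return tuples,
# which torch.jit.trace requires.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small", torchscript=True).eval()

enc = tokenizer("translate English to German: Hello", return_tensors="pt")
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])

# Positional order of T5's forward: input_ids, attention_mask, decoder_input_ids.
traced = torch.jit.trace(model, (enc.input_ids, enc.attention_mask, decoder_input_ids))
torch.jit.save(traced, "t5_forward.pt")
```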
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21717/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21717/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21716
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21716/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21716/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21716/events
|
https://github.com/huggingface/transformers/issues/21716
| 1,592,720,701
|
I_kwDOCUB6oc5e7v09
| 21,716
|
run_mlm.py shows error
|
{
"login": "dykim3",
"id": 100189969,
"node_id": "U_kgDOBfjHEQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100189969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dykim3",
"html_url": "https://github.com/dykim3",
"followers_url": "https://api.github.com/users/dykim3/followers",
"following_url": "https://api.github.com/users/dykim3/following{/other_user}",
"gists_url": "https://api.github.com/users/dykim3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dykim3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dykim3/subscriptions",
"organizations_url": "https://api.github.com/users/dykim3/orgs",
"repos_url": "https://api.github.com/users/dykim3/repos",
"events_url": "https://api.github.com/users/dykim3/events{/privacy}",
"received_events_url": "https://api.github.com/users/dykim3/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Could you please provide the result of `transformers-cli env` as instructed in the template?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,680
| 1,680
|
NONE
| null |
### System Info
Hi.
I'm training a BERT model with MLM using the following command. It seems that the values in `attention_mask` and `token_type_ids` become corrupted.
```
TOKENIZERS_PARALLELISM=false \
NCCL_P2P_DISABLE=1 python3 run_mlm.py \
--model_name_or_path "kykim/bert-kor-base" \
--tokenizer_name "kykim/bert-kor-base" \
--train_file /mnt/STT_lm/korea_addr_50000_numtotext.txt \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--do_train \
--do_eval \
--output_dir ./snapshots/test-mlm-50000 \
--overwrite_output_dir \
--dataloader_num_workers 8 \
--max_seq_length 200 #\
# --line_by_line
```
After a few batches, it throws this:
```
[INFO|modeling_bert.py:1370] 2023-02-21 03:18:09,285 >> BertForMaskedLM
attention_mask tensor([[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]], device='cuda:1')
token_type_ids tensor([[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]], device='cuda:1')
position_ids None
head_mask None
inputs_embeds None
encoder_hidden_states None
encoder_attention_mask None
output_attentions None
output_hidden_states None
return_dict True
[INFO|modeling_bert.py:1388] 2023-02-21 03:18:09,295 >>
prediction_scores: tensor([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]], device='cuda:1',
grad_fn=<ViewBackward0>)
labels: tensor([0, 0, 0, ..., 0, 0, 0], device='cuda:1')
[INFO|modeling_bert.py:1370] 2023-02-21 03:18:09,305 >> BertForMaskedLM
attention_mask tensor([[ 0, 0, 0, ..., 0,
0, 0],
[ 0, 0, 0, ..., 0,
0, 0],
[ 0, 0, 0, ..., 0,
0, 0],
...,
[ 0, 0, 0, ..., 0,
0, 0],
[ 0, 0, 0, ..., 0,
0, 0],
[ 0, 0, 0, ..., 0,
0, 0]], device='cuda:3')
token_type_ids tensor([[139726224884016, 139726226097792, 3254755329, ..., 0,
0, 139726226464001],
[ 0, 0, 0, ..., 0,
0, 0],
[ 0, 0, 0, ..., 0,
0, 0],
...,
[ 0, 0, 0, ..., 0,
0, 0],
[ 0, 0, 0, ..., 139726226538304,
106848880, 139761598107536],
[139726224884032, 139726226098720, 1, ..., 0,
0, 0]], device='cuda:3')
position_ids None
head_mask None
inputs_embeds None
encoder_hidden_states None
encoder_attention_mask None
output_attentions None
output_hidden_states None
return_dict True
[INFO|modeling_bert.py:1370] 2023-02-21 03:18:09,306 >> BertForMaskedLM
attention_mask tensor([[139726092393712, 0, 139726092318512, ..., 139726092393408, 0,
139726092318512],
[139726092393104, 0, 139726092318512, ..., 139726092392800, 0,
139726092318512],
[139726092392496, 0, 139726092318512, ..., 139726092392192, 0,
139726092318512],
...,
[139726092391888, 0, 139726092318512, ..., 139726092391584, 0,
139726092318512],
[139726092328960, 0, 139726092318512, ..., 139726092328656, 0,
139726092318512],
[139726092328352, 0, 139726092318512, ..., 139726092328048, 0,
139726092318512]], device='cuda:2')
token_type_ids tensor([[ 0, 0, 0, ..., 139726092397664, 0,
139726092324144],
[139726092397360, 0, 139726092324144, ..., 139726092397056, 0,
139726092324144],
[139726092396752, 0, 139726092324144, ..., 139726092329088, 0,
139726092324144],
...,
[139726092328784, 0, 139726092324144, ..., 139726092328480, 0,
139726092324144],
[139726092328176, 0, 139726092324144, ..., 139726092324144, 106848880,
139761598047072],
[139726090666304, 139726092316064, 1, ..., 0, 0,
0]], device='cuda:2')
position_ids None
head_mask None
inputs_embeds None
encoder_hidden_states None
encoder_attention_mask None
output_attentions None
output_hidden_states None
return_dict True
[INFO|modeling_bert.py:1370] 2023-02-21 03:18:09,306 >> BertForMaskedLM
attention_mask tensor([[139755215913280, 139755216000544, 1, ..., 0, 0, 0],
[139755215913280, 139755216808704, 1, ..., 139755216809376, 106848880, 139761598326640],
[139755215913280, 139755216807568, 1, ..., 139755216803312, 0, 139755216809376],
...,
[139755215913280, 139755216806672, 1, ..., 139755216797312, 0, 139755216809376],
[139755216804336, 0, 139755216809376, ..., 139755215913280, 139755215913280, 1],
[139755215913376, 139755216824784, 139764707470048, ..., 139762620122257, 0, 0]],
device='cuda:0')
token_type_ids tensor([[139761598326800, 0, 0, ..., 139755216815408, 139755215913088, 64],
[139755216806224, 139755215913088, 64, ..., 139755216807216, 139755215913088, 64],
[139755215913280, 139755216814896, 32, ..., 139755215913280, 139755216819504, 1],
...,
[139755215913280, 139755215913280, 1, ..., 139755215913328, 139755215913328, 32],
[139755215913376, 139755216830560, 139764707470048, ..., 0, 0, 0],
[ 0, 0, 0, ..., 139755215913232, 139755215913232, 0]],
device='cuda:0')
position_ids None
head_mask None
inputs_embeds None
encoder_hidden_states None
encoder_attention_mask None
output_attentions None
output_hidden_states None
return_dict True
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [8,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [9,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [12,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [14,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [15,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [18,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [21,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [23,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [24,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [25,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [26,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [27,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [28,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [29,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
File "run_mlm.py", line 645, in <module>
main()
File "run_mlm.py", line 594, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/transformers/src/transformers/trainer.py", line 1576, in train
return inner_training_loop(
File "/transformers/src/transformers/trainer.py", line 1843, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/transformers/src/transformers/trainer.py", line 2588, in training_step
loss = self.compute_loss(model, inputs)
File "/transformers/src/transformers/trainer.py", line 2620, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py", line 171, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py", line 181, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/parallel_apply.py", line 89, in parallel_apply
output.reraise()
File "/usr/local/lib/python3.8/dist-packages/torch/_utils.py", line 543, in reraise
raise exception
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/parallel_apply.py", line 64, in _worker
output = module(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/transformers/src/transformers/models/bert/modeling_bert.py", line 1384, in forward
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/transformers/src/transformers/models/bert/modeling_bert.py", line 708, in forward
prediction_scores = self.predictions(sequence_output)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/transformers/src/transformers/models/bert/modeling_bert.py", line 697, in forward
hidden_states = self.transform(hidden_states)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/transformers/src/transformers/models/bert/modeling_bert.py", line 676, in forward
hidden_states = self.dense(hidden_states)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasLtMatmul( ltHandle, computeDesc.descriptor(), &alpha_val, mat1_ptr, Adesc.descriptor(), mat2_ptr, Bdesc.descriptor(), &beta_val, result_ptr, Cdesc.descriptor(), result_ptr, Cdesc.descriptor(), &heuristicResult.algo, workspace.data_ptr(), workspaceSize, at::cuda::getCurrentCUDAStream())`
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```shell
TOKENIZERS_PARALLELISM=false \
NCCL_P2P_DISABLE=1 python3 run_mlm.py \
    --model_name_or_path "kykim/bert-kor-base" \
    --tokenizer_name "kykim/bert-kor-base" \
    --train_file /mnt/STT_lm/korea_addr_50000_numtotext.txt \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 16 \
    --do_train \
    --do_eval \
    --output_dir ./snapshots/test-mlm-50000 \
    --overwrite_output_dir \
    --dataloader_num_workers 8 \
    --max_seq_length 200 #\
    # --line_by_line
```
### Expected behavior
.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21716/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21715
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21715/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21715/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21715/events
|
https://github.com/huggingface/transformers/issues/21715
| 1,592,719,041
|
I_kwDOCUB6oc5e7vbB
| 21,715
|
Make `CLIPImageProcessor` compatible with `subfolder` kwarg
|
{
"login": "Abhinay1997",
"id": 24771261,
"node_id": "MDQ6VXNlcjI0NzcxMjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/24771261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abhinay1997",
"html_url": "https://github.com/Abhinay1997",
"followers_url": "https://api.github.com/users/Abhinay1997/followers",
"following_url": "https://api.github.com/users/Abhinay1997/following{/other_user}",
"gists_url": "https://api.github.com/users/Abhinay1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abhinay1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abhinay1997/subscriptions",
"organizations_url": "https://api.github.com/users/Abhinay1997/orgs",
"repos_url": "https://api.github.com/users/Abhinay1997/repos",
"events_url": "https://api.github.com/users/Abhinay1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abhinay1997/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for flagging! We'll be happy to have a look at a PR!",
"@sgugger, I created the PR. #21725 ",
"Closing the issue as PR is merged and issue is resolved. :)"
] | 1,676
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### Feature request
Currently, `CLIPImageProcessor` can't be initialized using:
```
CLIPImageProcessor.from_pretrained('kakaobrain/karlo-v1-alpha-image-variations', subfolder = 'feature_extractor', return_unused_kwargs=True)
```
as the `ImageProcessingMixin.get_image_processor_dict` method doesn't take a `subfolder` kwarg to pass into `cached_file`.
Link:
https://github.com/huggingface/transformers/blob/main/src/transformers/image_processing_utils.py#L214
Can we add the subfolder kwarg to be able to initialize a CLIPImageProcessor this way ?
### Motivation
Currently, other modules like the `CLIPTextModelWithProjection` are able to load from subfolders. It'd be nice to have CLIPImageProcessor also behave the same way.
### Your contribution
I'd be happy to open a PR on this. I was able to get it to work by passing `subfolder` directly to `cached_file` but need to verify a few more things.
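In the meantime, a minimal sketch of a different route via `hf_hub_download` that sidesteps `get_image_processor_dict` entirely; it assumes the config file in the subfolder is named `preprocessor_config.json`:
```python
# Download the processor config from the subfolder manually, then load it from file.
from huggingface_hub import hf_hub_download
from transformers import CLIPImageProcessor

path = hf_hub_download(
    "kakaobrain/karlo-v1-alpha-image-variations",
    filename="preprocessor_config.json",
    subfolder="feature_extractor",
)
image_processor = CLIPImageProcessor.from_json_file(path)
```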
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21715/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21714
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21714/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21714/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21714/events
|
https://github.com/huggingface/transformers/issues/21714
| 1,592,669,353
|
I_kwDOCUB6oc5e7jSp
| 21,714
|
Batch inference of GIT model
|
{
"login": "JosephChenHub",
"id": 22444830,
"node_id": "MDQ6VXNlcjIyNDQ0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/22444830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JosephChenHub",
"html_url": "https://github.com/JosephChenHub",
"followers_url": "https://api.github.com/users/JosephChenHub/followers",
"following_url": "https://api.github.com/users/JosephChenHub/following{/other_user}",
"gists_url": "https://api.github.com/users/JosephChenHub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JosephChenHub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JosephChenHub/subscriptions",
"organizations_url": "https://api.github.com/users/JosephChenHub/orgs",
"repos_url": "https://api.github.com/users/JosephChenHub/repos",
"events_url": "https://api.github.com/users/JosephChenHub/events{/privacy}",
"received_events_url": "https://api.github.com/users/JosephChenHub/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @amyeroberts and @gante ",
"Hi @JosephChenHub 👋 We have been working on similar problems since the release of v4.26. Can you please confirm that the problem still exists in `main`?\r\n\r\n(`pip install --upgrade git+https://github.com/huggingface/transformers.git`)",
"> pip install --upgrade git+https://github.com/huggingface/transformers.git\r\n\r\nHi @gante, I use this command \r\n\r\n> Hi @JosephChenHub 👋 We have been working on similar problems since the release of v4.26. Can you please confirm that the problem still exists in `main`?\r\n> \r\n> (`pip install --upgrade git+https://github.com/huggingface/transformers.git`)\r\n\r\nHi @gante , I use the source code of `main` branch (transformers==4.27.0.dev0), the issue still exists. ",
"Hey @JosephChenHub 👋 Thank you for your confirmation :) \r\n\r\nI was able to track down the root cause -- #21738 fixes it! After it is merged, you can install from `main` again and it should work 🚀 ",
"(@JosephChenHub It should be working now, let me know if it is not!)"
] | 1,676
| 1,677
| 1,677
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.27
- Python version: 3.8.0
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.9.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: No
When I use the GIT model for image captioning with batched inputs, it throws an exception:
```
File "/usr/local/lib/python3.8/dist-packages/transformers/models/git/modeling_git.py", line 1272, in forward
hidden_states = torch.cat((projected_visual_features, embedding_output), dim=1)
RuntimeError: Sizes of tensors must match except in dimension 0. Got 1 and 0 (The offending index is 0)
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
To reproduce this error, use the following core code:
```
processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco").to(device)
model.eval()
...
pixel_values = processor(images=images, return_tensors="pt").pixel_values.to(device)
print(pixel_values.shape) # torch.Size([10, 3, 224, 224])
output_ids = model.generate(pixel_values=pixel_values, max_length=50)
preds = processor.batch_decode(output_ids, skip_special_tokens=True)[0]
```
I have checked the source file `modeling_git.py`, line 1272:
![image]
Because **embedding_output.size(0)** is 1 while the visual features' size(0) is 10 (and 1 // 10 = 0),
![image]
these two features cannot be concatenated.
### Expected behavior
support batch input
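Until batched pixel inputs work, a minimal interim sketch that processes one image per `generate` call; it reuses `model`, `processor`, and `pixel_values` from the repro code above:
```python
# Interim workaround: loop over images so the text embeddings and visual
# features always share batch size 1.
captions = []
for i in range(pixel_values.size(0)):
    ids = model.generate(pixel_values=pixel_values[i : i + 1], max_length=50)
    captions.append(processor.batch_decode(ids, skip_special_tokens=True)[0])
```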
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21714/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21713
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21713/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21713/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21713/events
|
https://github.com/huggingface/transformers/issues/21713
| 1,592,609,835
|
I_kwDOCUB6oc5e7Uwr
| 21,713
|
Unable to use BLIP2 with caption_coco_opt6.7b at HEAD via salesforce-lavis (also HEAD)
|
{
"login": "AstraliteHeart",
"id": 81396681,
"node_id": "MDQ6VXNlcjgxMzk2Njgx",
"avatar_url": "https://avatars.githubusercontent.com/u/81396681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AstraliteHeart",
"html_url": "https://github.com/AstraliteHeart",
"followers_url": "https://api.github.com/users/AstraliteHeart/followers",
"following_url": "https://api.github.com/users/AstraliteHeart/following{/other_user}",
"gists_url": "https://api.github.com/users/AstraliteHeart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AstraliteHeart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AstraliteHeart/subscriptions",
"organizations_url": "https://api.github.com/users/AstraliteHeart/orgs",
"repos_url": "https://api.github.com/users/AstraliteHeart/repos",
"events_url": "https://api.github.com/users/AstraliteHeart/events{/privacy}",
"received_events_url": "https://api.github.com/users/AstraliteHeart/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @younesbelkada ",
"Hey @AstraliteHeart 👋 This issue seems to be a duplicate of https://github.com/huggingface/transformers/issues/21599, which is fixed.\r\n\r\nCan I ask you to try to run your script using `transformers` `main` branch, i.e. after installing with `pip install --upgrade git+https://github.com/huggingface/transformers.git`?",
"I don't think this is a duplicate, my env is past that fix (see p4 in the original repro steps), I've updated form `main` to confirm as follows:\r\n\r\n1. `pip install --upgrade git+https://github.com/huggingface/transformers.git`\r\n2. `Resolved https://github.com/huggingface/transformers.git to commit bb5a2f2fc30985841289207b9f1f7765d8abc4e0`\r\n3. `python test_simple.py`\r\n4. `RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 25 but got size 5 for tensor number 1 in the list.`",
"Thank you for confirming @AstraliteHeart 🤗 I will dig deeper and let you know what I find!",
"After some digging, we can see that the exception is raised as follows:\r\n\r\n```py\r\n│ /home/joao/hf/lib/python3.10/site-packages/lavis/models/blip2_models/modeling_opt.py:703 in │\r\n│ forward │\r\n│ │\r\n│ 700 │ │ │ inputs_embeds = self.embed_tokens(input_ids) │\r\n│ 701 │ │ │\r\n│ 702 │ │ if query_embeds is not None: │\r\n│ ❱ 703 │ │ │ inputs_embeds = torch.cat([query_embeds, inputs_embeds], dim=1) │\r\n│ 704 │ │ │ input_shape = inputs_embeds.size()[:-1] │\r\n│ 705 │ │ │\r\n│ 706 │ │ # embed positions │\r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯\r\nRuntimeError: Sizes of tensors must match except in dimension 1. Expected size 25 but got size 5 for tensor number 1 in the list.\r\n```\r\n\r\nFrom the full stack trace, we can conclude that the error arises from an issue in `lavis`, and not in `transformers` :) Actually, the root cause for this issue is something that we have addressed [on this PR](https://github.com/huggingface/transformers/pull/21405) -- `lavis` has a different implementation, where they have a modified OPT model to handle the image embeddings, where we decided to update `.generate()` to handle soft-prompting.\r\n\r\n@AstraliteHeart This means you have two options:\r\n1. Update your code to rely on `transformers`, as opposed to `lavis`. See [here](https://huggingface.co/docs/transformers/main/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example) for examples.\r\n2. Open an issue in `lavis`, so they can help you with this issue :) ",
"@gante thank you for debugging!\r\n\r\nI can confirm that syncing before https://github.com/huggingface/transformers/pull/21405 (edc1e734bfc01109b8c66881d950ebbda032a6d2) works, I'll open an issue on SF side to warn them about the breakage, unfortunately this brings me to the original issue of trying to use `convert_blip_2_original_to_pytorch.py`, perhaps you can help me figure out how the BLIP2 models were converted? (I understand, this is irrelevant to most users but only a few brave souls who are finetuning BLIP2 via LAVIS but want to then load it in HF.)\r\n\r\nI've tried both `pip install git+https://github.com/nielsrogge/LAVIS.git@fix_lavis` (mentioned in the script) and `lavis` from HEAD, but I am getting this trace\r\n\r\n```\r\n$ python ./convert_blip_2_original_to_pytorch.py\r\nLoading original model...\r\nPosition interpolate from 16x16 to 26x26\r\ntokenizer facebook/opt-6.7b\r\nLoading checkpoint shards: Done!\r\nTraceback (most recent call last):\r\n File \"./convert_blip_2_original_to_pytorch.py\", line 304, in <module>\r\n convert_blip2_checkpoint(args.model_name, args.pytorch_dump_folder_path, args.push_to_hub)\r\n File \"/.../envs/lavis/lib/python3.8/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"./convert_blip_2_original_to_pytorch.py\", line 216, in convert_blip2_checkpoint\r\n original_logits = original_logits.logits\r\nAttributeError: 'dict' object has no attribute 'logits' // indeed, this is a dictionary containing only 'loss'\r\n```\r\n\r\nwhat combination of versions of `transformers` and `lavis` was used during conversion?\r\n\r\n\r\n",
"Hi,\r\n\r\nThanks for converting BLIP2 to HF :) I actually forked the LAVIS repo and made some tweaks to facilitate conversion (I removed a bunch of unnecessary requirements etc). See [here](https://github.com/huggingface/transformers/blob/4446b6b094a7c036d09059885bec679279c9b488/src/transformers/models/blip_2/convert_blip_2_original_to_pytorch.py#L27). ",
"Hi Niels, thank you for checking this.\r\n\r\nI did use your fork (or so I thought, sigh), but I redid everything from scratch while comparing traces with code and, well... turned out I moved my blip2 conversion script to LAVIS git root folder which kept including their model (as it's in the `lavis` folder) even with your fixed one being installed (so I do apologies). \r\n\r\nI can now confirm that with your fork I was able to convert my model with snapshot before https://github.com/huggingface/transformers/pull/21405 and load it it in 8 bits with latest `bitsandbytes` keeping VRAM usage at 11.1GB (vs around 18.5GB without).\r\n\r\nDo you have any guidance on matching outputs between lavis and hf models? I ran about 50 samples though lavis/hf16/hf8 and while hf16 and hf8 are mostly consistent (good), lavis output is better in all cases. (see anecdotal examples below)\r\n\r\nHere is roughly how I load and run all models (https://gist.github.com/AstraliteHeart/4d7ebf834021b8e1c9bc439c1633002c) I tried to make sure all settings and rnd seeds are matching, but perhaps I am missing something?\r\n\r\nhttps://derpicdn.net/img/view/2023/2/23/3051871.png\r\n```\r\n'caption_lavis': ['scootaloo, apple bloom, and applejack in a group hug scootaloo, apple bloom, and applejack are all smiling white background', 'scootaloo, applebloom, and applejack in a group hug scootaloo and applebloom are jumping applejack is smiling white background', 'scootaloo, apple bloom, and applejack in a group hug scootaloo, apple bloom, and applejack are jumping and smiling white background'],\r\n'caption_hf_16': ['a series of images of sweetie belle, applejack, scootaloo, applebloom, rarity, pinkie pie, twilight sparkle, rarity, twilight sparkle, rarity, rarity, rarity, rarity, rarity, rarity', 'a series of images of sweetie belle, applejack, scootaloo, applebloom, rarity, pinkie pie, twilight sparkle, rarity, twilight sparkle, twilight sparkle, twilight sparkle, twilight sparkle', 'a series of images of sweetie belle, applejack, scootaloo, applebloom, rarity, pinkie pie, twilight sparkle, rarity, twilight sparkle, twilight sparkle, rarity, rarity, rarity, rarity'],\r\n'caption_hf_8': ['a series of images of sweetie belle, applebloom, scootaloo, applejack, rarity, pinkie pie, twilight sparkle, fluttershy, rarity, pinkie pie, twilight sparkle, rarity, pink', 'a series of images of sweetie belle, applebloom, scootaloo, applejack, rarity, pinkie pie, twilight sparkle, fluttershy, rarity, pinkie pie, twilight sparkle, twilight sparkle', 'a series of images of sweetie belle, applebloom, scootaloo, applejack, rarity, pinkie pie, twilight sparkle, fluttershy, rarity, twilight sparkle, twilight sparkle, twilight sparkle']\r\n```\r\n\r\nhttps://derpicdn.net/img/2017/7/7/1480500/large.png\r\n```\r\n'caption_lavis': ['alicorn twilight sparkle is laying on her back with a book on her head and a book on her chest she is surrounded by books on the floor and on the walls she has a book on her head and a book on her chest she is', 'alicorn twilight sparkle is laying on her back with a book on her head and a book on her chest she is surrounded by books on the floor and on the walls she is also wearing a book on her head and a book on her chest', 'alicorn twilight sparkle is laying on her back with a book on her head and a book on her chest she is surrounded by books on the floor and on the walls she has a book on her head and a book on her chest she has'],\r\n'caption_hf_16': ['posterior view of twilight sparkle lying on the floor 
surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by', 'posterior view of twilight sparkle lying on the floor surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books\\n', 'posterior view of twilight sparkle lying on the floor surrounded by a pile of books, surrounded by a pile of books, surrounded by books, surrounded by books, surrounded by books, surrounded by books, surrounded by books, surrounded by books'],\r\n'caption_hf_8': ['twilight sparkle is lying on the floor surrounded by a pile of books she is surrounded by a pile of books on the floor she is surrounded by a pile of books on the floor she is surrounded by a pile of books on the floor she','twilight sparkle is lying on the floor surrounded by a pile of books she is surrounded by a pile of books on top of her she is surrounded by a pile of books on top of her she is surrounded by a pile of books on top', 'twilight sparkle is lying on the floor surrounded by a pile of books she is surrounded by a pile of books on the floor she is surrounded by a pile of books on the floor she is surrounded by a pile of books on the floor her']\r\n```",
"Thanks for reporting, that should not be the case! I extensively tested the greedy/beam search outputs on original vs my implementation to make sure everything works as expected.\r\n\r\nBut the generate method has had some updates now so there might be a small issue. However isn't it weird that the first token is already different? cc'ing @gante here",
"Also I'm not sure you can run both LAVIS and Transformers main branch in the same environment to compare, cause LAVIS relies on an older version of Transformers",
"Results on top are from `transformers` https://gist.github.com/AstraliteHeart/4d7ebf834021b8e1c9bc439c1633002c + your fork of `lavis`.\r\n\r\nSome more tests (tldr, latest transformers still do not produce the same output)\r\n\r\nOfficial `lavis` repo:\r\n```\r\n['scootaloo, apple bloom, and applejack in a group hug scootaloo, apple bloom, and applejack are all smiling white background', 'scootaloo, applebloom, and applejack in a group hug scootaloo and applebloom are jumping applejack is smiling white background', 'scootaloo, apple bloom, and applejack in a group hug scootaloo, apple bloom, and applejack are jumping and smiling white background']\r\n```\r\n```\r\n['alicorn twilight sparkle is laying on her back with a book on her head and a book on her chest she is surrounded by books on the floor and on the walls she has a book on her head and a book on her chest she is', 'alicorn twilight sparkle is laying on her back with a book on her head and a book on her chest she is surrounded by books on the floor and on the walls she is also wearing a book on her head and a book on her chest', 'alicorn twilight sparkle is laying on her back with a book on her head and a book on her chest she is surrounded by books on the floor and on the walls she has a book on her head and a book on her chest she has']\r\n```\r\n\r\nLatest transformers:\r\n```\r\n'caption_hf_16': [\r\n 'a series of images of sweetie belle, applejack, scootaloo, applebloom, rarity, pinkie pie, twilight sparkle, rarity, twilight sparkle, rarity, rarity, rarity, rarity, rarity, rarity',\r\n 'a series of images of sweetie belle, applejack, scootaloo, applebloom, rarity, pinkie pie, twilight sparkle, rarity, twilight sparkle, twilight sparkle, twilight sparkle, twilight sparkle',\r\n 'a series of images of sweetie belle, applejack, scootaloo, applebloom, rarity, pinkie pie, twilight sparkle, rarity, twilight sparkle, twilight sparkle, rarity, rarity, rarity, rarity'\r\n],\r\n'caption_hf_8': [\r\n 'a series of images of sweetie belle, applebloom, scootaloo, applejack, rarity, pinkie pie, twilight sparkle, fluttershy, rarity, pinkie pie, twilight sparkle, rarity, pink',\r\n 'a series of images of sweetie belle, applebloom, scootaloo, applejack, rarity, pinkie pie, twilight sparkle, fluttershy, rarity, pinkie pie, twilight sparkle, twilight sparkle',\r\n 'a series of images of sweetie belle, applebloom, scootaloo, applejack, rarity, pinkie pie, twilight sparkle, fluttershy, rarity, twilight sparkle, twilight sparkle, twilight sparkle'\r\n]\r\n```\r\n\r\n\r\n```\r\ncaption_hf_16': [\r\n 'posterior view of twilight sparkle lying on the floor surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by',\r\n 'posterior view of twilight sparkle lying on the floor surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books\\n',\r\n 'posterior view of twilight sparkle lying on the floor surrounded by a pile of books, surrounded by a pile of books, surrounded by books, surrounded by books, surrounded by books, surrounded by books, surrounded by books, surrounded by books'\r\n],\r\n'caption_hf_8': [\r\n 'twilight sparkle is lying on the floor surrounded by a pile of books she is surrounded by a pile of books on the floor she is surrounded by a pile of books on the floor she is surrounded by a pile of books on the floor she',\r\n 'twilight 
sparkle is lying on the floor surrounded by a pile of books she is surrounded by a pile of books on top of her she is surrounded by a pile of books on top of her she is surrounded by a pile of books on top',\r\n 'twilight sparkle is lying on the floor surrounded by a pile of books she is surrounded by a pile of books on the floor she is surrounded by a pile of books on the floor she is surrounded by a pile of books on the floor her'\r\n]\r\n```",
"Hey @AstraliteHeart 👋 Differences in generation can be explained by many parts of the stack, from ninja numerical bugs to intentional implementation quirks. Debugging the exact cause takes time, so I want to ask for your help :D \r\n\r\n1. Can you confirm that both `lavis` and `transformers` are recent versions? (latest release or newer)\r\n2. Comparing results with sampling is impossible, as minor changes like the order of operations will produce different results. Have you confirmed that the results are different without sampling? (you can ensure that it is not sampling if you are not setting seeds and you're still getting the same outputs)\r\n3. (If the answers to the questions above are positive) Can you please share a gist like the one you shared above, except without reliance on local data? It would help me get started 🤗 \r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
 u have any guidance">
"> u have any guidance on matching outputs between lav\r\n\r\nCan you please share how you managed to convert this? I am also stuck; is there any specific transformers version?",
"I have a PR here which aims to further verify equivalence: https://github.com/huggingface/transformers/pull/24854.\r\n\r\nThe conversion script can be found [here](https://github.com/NielsRogge/transformers/blob/improve_blip2/src/transformers/models/blip_2/convert_blip_2_original_to_pytorch.py) and can be run as follows:\r\n\r\n```\r\npip install -U git+https://github.com/nielsrogge/LAVIS.git@blip2_float32\r\ngit clone -b improve_blip2 git+https://github.com/nielsrogge/transformers.git\r\ncd transformers\r\npython src/transformers/models/blip_2/convert_blip_2_original_to_pytorch.py --model_name \"blip2-flan-t5-xl\"\r\n```\r\nThe reason I forked LAVIS is to make sure I can compare both implementations using float32.\r\n\r\n"
] | 1,676
| 1,690
| 1,680
|
NONE
| null |
### System Info
working:
- `transformers` version: 4.26.1
- Platform: Linux-6.0.12-x86_64-with-glibc2.10
- Python version: 3.8.16
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
broken:
- `transformers` version: 4.27.0.dev0
- Platform: Linux-6.0.12-x86_64-with-glibc2.10
- Python version: 3.8.16
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@gante @NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Start with clean env setup via https://github.com/salesforce/LAVIS/blob/main/requirements.txt (transformers-4.26.1)
2. Run `python test_simple.py`, model is correctly loaded and prints a caption
3. `pip install --upgrade git+https://github.com/huggingface/transformers` (I wanted the new shiny blip2 conversion script so I can convert my finetuned model into HF format)
4. `Resolved https://github.com/huggingface/transformers to commit 8b3db33a763ccef828fca89bac7e6cbff314f131`
5. Run `python test_simple.py`
6. `RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 25 but got size 5 for tensor number 1 in the list.`
```python
import torch
from lavis.models import load_model_and_preprocess
import torch
from PIL import Image
import requests
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, vis_processors, _ = load_model_and_preprocess(name="blip2_opt", model_type="caption_coco_opt6.7b", is_eval=True, device=device)
url = "..."
raw_image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
data = model.generate({"image": image})
print(data)
```
### Expected behavior
Can use BLIP2 with latest HF
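
For reference, a minimal sketch of the `transformers`-native path suggested in the comments above (the checkpoint id is my assumption for the caption_coco_opt6.7b variant, the image URL stays elided as in the repro script, and a CUDA GPU is assumed for `float16`):

```python
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-6.7b-coco")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-6.7b-coco", torch_dtype=torch.float16
).to("cuda")

url = "..."  # kept elided, as in the repro above
raw_image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
inputs = processor(images=raw_image, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```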
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21713/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21712
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21712/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21712/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21712/events
|
https://github.com/huggingface/transformers/issues/21712
| 1,592,585,800
|
I_kwDOCUB6oc5e7O5I
| 21,712
|
Transformers version 4.27.0.dev0
|
{
"login": "bg-uni",
"id": 53542735,
"node_id": "MDQ6VXNlcjUzNTQyNzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/53542735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bg-uni",
"html_url": "https://github.com/bg-uni",
"followers_url": "https://api.github.com/users/bg-uni/followers",
"following_url": "https://api.github.com/users/bg-uni/following{/other_user}",
"gists_url": "https://api.github.com/users/bg-uni/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bg-uni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bg-uni/subscriptions",
"organizations_url": "https://api.github.com/users/bg-uni/orgs",
"repos_url": "https://api.github.com/users/bg-uni/repos",
"events_url": "https://api.github.com/users/bg-uni/events{/privacy}",
"received_events_url": "https://api.github.com/users/bg-uni/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This problem has been resolved.",
"Can you share how to solve this issue? \r\nThank you in advance. ",
"Hi @kksj216,\r\nUse the following steps\r\ngit clone https://github.com/huggingface/transformers.git\r\ncd transformers\r\npip install -e .",
"@tanmey007 Thanks for your help! \r\nBut that steps did not work for me... Still had the same error. 🥲 ",
 @tanmey007 Thanks">
"> @tanmey007 Thanks for your help! But those steps did not work for me... Still had the same error. 🥲\r\n\r\nI am using jupyter notebook, and used the following steps,\r\n!git clone https://github.com/huggingface/transformers.git\r\nimport os\r\nos.chdir('transformers')\r\n!pip install -e .\r\n",
"You can either install directly from the `main` branch: https://huggingface.co/docs/transformers/installation#install-from-source\r\n\r\nOr through an editable install: https://huggingface.co/docs/transformers/installation#editable-install",
"Now it works in jupyter notebook. I really appreciate all your help!! :)\r\n\r\nBut it still doesn't work in the terminal, is there any reason? Or are there other things I need to set up? ",
"@kksj216 If it is not working in the terminal, it's likely the environment has not been updated and the correct version of transformers is not being used. \r\n\r\nTo check which version of transformers is being run in the environment, you can run in the terminal: \r\n`python -c \"import transformers; print(transformers.__version__)\"`\r\n\r\nOr to see more information about the library, where it's installed etc:\r\n`pip show transformers`\r\n\r\nIf you wish to run from the development branch, then the instructions @sanchit-gandhi or @tanmey007 posted should be followed. If this doesn't work, then I suggest uninstalling `transformers` from your environment and then try installing again. ",
"> @kksj216 If it is not working in the terminal, it's likely the environment has not been updated and the correct version of transformers is not being used.\r\n> \r\n> To check which version of transformers is being run in the environment, you can run in the terminal: `python -c \"import transformers; print(transformers.__version__)\"`\r\n> \r\n> Or to see more information about the library, where it's installed etc: `pip show transformers`\r\n> \r\n> If you wish to run from the development branch, then the instructions @sanchit-gandhi or @tanmey007 posted should be followed. If this doesn't work, then I suggest uninstalling `transformers` from your environment and then try installing again.\r\n\r\nThank you for your kind help! \r\nI solved the issue now :) "
] | 1,676
| 1,679
| 1,676
|
NONE
| null |
### System Info
Hello,
I have a question.
I would like to upgrade Transformers to 4.27 because I get the following error when I run run_mlm.py.
The latest version available via pip install is 4.26.1.
--------------------------------------------------------------------------
python run_mlm.py --model_type bert
Traceback (most recent call last):
File "C:\Users\d_test_user\Documents\test\transformers-main\examples\pytorch\language-modeling\run_mlm.py", line 56, in <module>
check_min_version("4.27.0.dev0")
File "C:\Users\d_test_user\Documents\transformer_example\.env\lib\site-packages\transformers\utils\__init__.py", line 208, in check_min_version
raise ImportError(
ImportError: This example requires a source install from HuggingFace Transformers (see `https://huggingface.co/transformers/installation.html#installing-from-source`), but the version found is 4.26.1.
Check out https://huggingface.co/transformers/examples.html for the examples corresponding to other versions of HuggingFace Transformers.
--------------------------------------------------------------------------
Thanks for your help!
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python run_mlm.py
Traceback (most recent call last):
File "C:\Users\d_test_user\Documents\test\transformers-main\examples\pytorch\language-modeling\run_mlm.py", line 56, in <module>
check_min_version("4.27.0.dev0")
File "C:\Users\d_test_user\Documents\transformer_example\.env\lib\site-packages\transformers\utils\__init__.py", line 208, in check_min_version
raise ImportError(
ImportError: This example requires a source install from HuggingFace Transformers (see `https://huggingface.co/transformers/installation.html#installing-from-source`), but the version found is 4.26.1.
Check out https://huggingface.co/transformers/examples.html for the examples corresponding to other versions of HuggingFace Transformers.
### Expected behavior
I want to create a bert model using the example.
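
The guard that raises this error sits at the top of the example script; a minimal sketch of what it checks, mirroring the traceback above:

```python
from transformers.utils import check_min_version

# Raises ImportError when the installed release (here 4.26.1) is older than the
# development version the example was written against; a source install
# (`pip install git+https://github.com/huggingface/transformers`) satisfies it.
check_min_version("4.27.0.dev0")
```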
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21712/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21711
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21711/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21711/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21711/events
|
https://github.com/huggingface/transformers/issues/21711
| 1,592,566,548
|
I_kwDOCUB6oc5e7KMU
| 21,711
|
Using run_mlm.py to pretrain a roberta base model from scratch outputs do not include <bos> or <eos> tokens
|
{
"login": "Rallio67",
"id": 121454712,
"node_id": "U_kgDOBz1AeA",
"avatar_url": "https://avatars.githubusercontent.com/u/121454712?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rallio67",
"html_url": "https://github.com/Rallio67",
"followers_url": "https://api.github.com/users/Rallio67/followers",
"following_url": "https://api.github.com/users/Rallio67/following{/other_user}",
"gists_url": "https://api.github.com/users/Rallio67/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rallio67/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rallio67/subscriptions",
"organizations_url": "https://api.github.com/users/Rallio67/orgs",
"repos_url": "https://api.github.com/users/Rallio67/repos",
"events_url": "https://api.github.com/users/Rallio67/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rallio67/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The model config.json have a notable difference between the roberta-base and my new pretrained roberta model.\r\n\r\nmax_position_embeddings in roberta-base is equal to 514, while in my new pretrained model it is set to 512.\r\n\r\nI also notice in the script there is a default setting to \"mask special tokens\"\r\n\r\nWe use this option because DataCollatorForLanguageModeling (see below) is more efficient when it receives the `special_tokens_mask`.\r\n\r\nreturn_special_tokens_mask=True,\r\n\r\nIs it possible that this is the source of the issue? Thank you for any help that can be offered on this problem.",
"cc @ArthurZucker and @younesbelkada ",
"Any updates on this? Would appreciate any help to identify the source of this bug.",
"Hey, this should probably be aske on the [`forum`](https://discuss.huggingface.co/) as it is not a bug and there we can reproduce your issue (the model is private). \r\n1. The training might have gone wrong. \r\n2. The `generation_config` or `config` file might be wrong. Both your `bos_token` and `eos_token` are wrong 0, and 2 changed to 8. \r\n\r\nIf you can check the `eos` and `pad` and `bos` token arguments and try to make sure that the inputs that you feed to the model are the same, would be great. \r\nAlso be careful with the formating of your issue, it is very hard to read. If you want an answer fast, this plays a bit against you 😉 ",
"Maybe there is some misunderstanding in what I posted. To the best of my knowledge I am using an unmodified, default training script from huggingface on a plain text file using the default configuration for roberta (a model that has been on HF for 2 years or more I think). I did a fresh install from source of transformers on a 8x A100 instance.\r\n\r\nsee here:\r\nhttps://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py\r\n\r\nI ran the script using only default configuration commands (they are posted above) on a text file using the default roberta configuration, but the outputs are never the correct 0 and 2. Any configuration I am using is automatically generated by the training script and then I am running the generation script exactly the same as I do with roberta-base, but substituting the model directory generated by the run_mlm.py script.\r\n\r\nIf I am running the script with all default parameters, I think it qualifies as a bug? ",
"Okay! Thanks for clarifying, I will have a look as soon as I can. It seems like a bug indeed",
"The troubleshooting I did myself on this makes me think it has something to do with the special tokens being attention masked in the training dataset preparation. Normally masking special tokens makes sense for some language models (like the `<pad>` token), but I think in this case for the BOS/EOS you don't want them masked. The reason token 8 is showing up in those positions is because the word \"and\" is extremely common and I think it minimizes overall loss by putting that token. It was never configured to use token 8 (early on in the training it would be a random token like period \".\" or \"the\" or \"and\". ). Overall the model is still training and working well, its just not ever generating the EOS/BOS token in the \"unmasked\" output.",
"Ok, that's fairly interesting. \r\nNormally when generating, the bos token should be `forced` via the logits processor. So if you generate using `model.generate` I am guessing that this won't happen even if you have the `masks`.\r\nIt is okay if there tokens are attention masked, I think they should always be forced (during training for example, the decoder input ids should always start with the `bos` so that it is not predicted, and then the loss is not computed on it. \r\nDoes that make sense?\r\n\r\n",
"The roberta-base and roberta-large models on huggingface when used with `model.generate` does properly create the BOS/EOS tokens. The output from my checkpoints inserts an extra first and last token, but the token is not BOS/EOS and appears to be learned. ",
"Is there any update about this issue, I'm facing the same error? @ArthurZucker ",
 The troubleshooting">
"> The troubleshooting I did myself on this makes me think it has something to do with the special tokens being attention masked in the training dataset preparation. Normally masking special tokens makes sense for some language models (like the <pad> token), but I think in this case for the BOS/EOS you don't want them masked. The reason token 8 is showing up in those positions is because the word \"and\" is extremely common and I think it minimizes overall loss by putting that token. It was never configured to use token 8 (early on in the training it would be a random token like period \".\" or \"the\" or \"and\". ). Overall the model is still training and working well, its just not ever generating the EOS/BOS token in the \"unmasked\" output.\r\n\r\nSo regarding the lead posted here by @Rallio67, I think I agree with him:\r\n- The special tokens should not be masked when computing the loss: the reason behind this is that if you want the model to learn that it has to predict the `eos` and `bos` token when computing the loss, you should not mask them. This is visible as the model ends up learning to predict the most common words at the beginning and end, instead of predicting the bos and eos.\r\nI suggest trying out without the special mask, and if it works for you I'll try to find a fix that does not remove backward compatibility! \r\n",
"\r\nTraining without special tokens also doesn't work, not sure what is the reason then",
"Without special tokens or without special masks? ",
"I trained it with return_special_tokens_mask=False, but only for 3 epochs (is it possible that when I train it fully it's able to learn) ?",
"Yep, if you can would be great to see after the same amount of training as the model that raised the issue.",
"I trained the model for 75 epochs, still <bos> and <eos> tokens are not appearing",
"Hey! I won't really have time to dive deep into this one, If you could share some example inputs that are fed to the model (forgot to ask for the context of `my_text.txt`, but if the tokenizer does not pass bos and eos (by that I mean does not add them) it might be either the default roberta tokenizer that can't be used out of the box for this or something else. ",
"Okay, here is a very relevant comment : https://github.com/huggingface/transformers/issues/22794#issuecomment-1598977285, it is important to make sure that when the script calls `torch_mask_tokens`, the loss is only computed on the masked tokens (and since there is a call to `masked_fill_(special_tokens_mask, value=0.0)`, which creates the probability of masking special tokens, setting is to `0`. This means that the next call:\r\n```python \r\n probability_matrix.masked_fill_(special_tokens_mask, value=0.0)\r\n masked_indices = torch.bernoulli(probability_matrix).bool()\r\n labels[~masked_indices] = -100 # We only compute loss on masked tokens\r\n```\r\nwill set the labels for `eos` and `bos` to `-100` always ignoring them. \r\n\r\nIf you remove the special tokens mask, it is automatically created using `get_special_tokens_mask` which is why the tokens are not learned either. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,690
| 1,690
|
NONE
| null |
### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: deepspeed
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I am attempting to train a roberta-base model using the defaults on a custom corpus.
deepspeed --num_gpus 8 run_mlm.py
--model_type roberta
--max_seq_length 128
--do_train
--per_device_train_batch_size 512
--fp16
--save_total_limit 3
--num_train_epochs 30
--deepspeed ds_config.json
--learning_rate 1e-4
--eval_steps 50
--max_eval_samples 4000
--evaluation_strategy steps
--tokenizer "roberta-large"
--warmup_steps 30000
--adam_beta1 0.9
--adam_beta2 0.98
--adam_epsilon 1e-6
--weight_decay 0.01
--lr_scheduler_type linear
--preprocessing_num_workers 8
--train_file my_text.txt
--line_by_line
--output_dir my_roberta_base
The training works and the loss goes down and the accuracy goes up. However, when I compare the outputs to the original roberta-base I see a behavior that appears to be a glitch or problem with the training.
### Expected behavior
Expected behavior using roberta-base from the huggingface hub shows the first and last token of the output being the `<bos>` and `<eos>` tokens, respectively, while my newly trained roberta-base model shows token #8 ( and) in those positions. I think this token was learned instead of being automatically set to `<bos>` and `<eos>`, which should be the expected behavior for this script.
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model1 = AutoModelForMaskedLM.from_pretrained("roberta-base", torch_dtype=torch.float16).cuda(0)
model2 = AutoModelForMaskedLM.from_pretrained("rob_wiki_base", torch_dtype=torch.float16).cuda(0)
text="The main causes of death for <mask> are human-related issues, such as habitat destruction and human objects. Their slow-moving, curious <mask> has led to violent collisions with propeller-driven boats and ships. Some manatees have been found with over 50 scars on them from propeller <mask>. Natural causes of death include adverse temperatures, predation by <mask> on young, and disease."
input = tokenizer(text, truncation=True, padding=True, return_tensors="pt")
output1=model1(input["input_ids"].cuda(0))
output2 = model2(input["input_ids"].cuda(0))
predicted_token_id1 = output1[0][0].argmax(axis=-1)
predicted_token_id2 = output2[0][0].argmax(axis=-1)
print("Original roberta-base output:")
print(predicted_token_id1)
print(tokenizer.decode(predicted_token_id1))
print("-"*20)
print("My new roberta-base output:")
print(predicted_token_id2)
print(tokenizer.decode(predicted_token_id2))
print("-"*20)
```
Original roberta-base output:
tensor([ 0, 133, 1049, 4685, 9, 744, 13, 18018, 32, 1050,
12, 3368, 743, 6, 215, 25, 14294, 8181, 8, 1050,
8720, 4, 2667, 2635, 12, 19838, 6, 10691, 3650, 34,
669, 7, 4153, 25062, 19, 39238, 12853, 12, 9756, 8934,
8, 7446, 4, 993, 313, 877, 293, 33, 57, 303,
19, 81, 654, 26172, 15, 106, 31, 39238, 12853, 5315,
4, 7278, 4685, 9, 744, 680, 12661, 3971, 6, 12574,
1258, 30, 22139, 15, 664, 6, 8, 2199, 4, 2],
device='cuda:0')
<s>The main causes of death for whales are human-related issues, such as habitat destruction and human objects. Their slow-moving, curious behavior has led to violent collisions with propeller-driven boats and ships. Some manatees have been found with over 50 scars on them from propeller strikes. Natural causes of death include adverse temperatures, predation by predators on young, and disease.</s>
My new roberta-base output:
tensor([ 8, 133, 1049, 4685, 9, 744, 13, 5868, 32, 1050,
12, 3368, 743, 6, 215, 25, 14294, 8181, 8, 1050,
8720, 4, 2667, 2635, 12, 19838, 6, 10691, 2574, 34,
669, 7, 4153, 25062, 19, 39238, 12853, 12, 9756, 8934,
8, 7446, 4, 993, 313, 877, 293, 33, 57, 303,
19, 81, 654, 26172, 15, 106, 31, 39238, 12853, 5315,
4, 7278, 4685, 9, 744, 680, 12661, 3971, 6, 12574,
1258, 30, 5868, 15, 664, 6, 8, 2199, 4, 8],
device='cuda:0')
andThe main causes of death for humans are human-related issues, such as habitat destruction and human objects. Their slow-moving, curious nature has led to violent collisions with propeller-driven boats and ships. Some manatees have been found with over 50 scars on them from propeller strikes. Natural causes of death include adverse temperatures, predation by humans on young, and disease. and
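
A minimal sketch of the data-collator logic quoted in the comments above, showing why `<s>`/`</s>` are never selected as MLM targets (the 0.15 masking probability mirrors the default `mlm_probability`):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
enc = tokenizer("a short example", return_special_tokens_mask=True, return_tensors="pt")
special_tokens_mask = enc["special_tokens_mask"].bool()

# The collator zeroes the masking probability at special-token positions, so
# <s> and </s> are never chosen and their labels stay -100 (ignored by the loss).
probability_matrix = torch.full(enc["input_ids"].shape, 0.15)
probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
masked_indices = torch.bernoulli(probability_matrix).bool()

labels = enc["input_ids"].clone()
labels[~masked_indices] = -100  # loss is only computed on masked tokens
```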
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21711/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21710
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21710/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21710/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21710/events
|
https://github.com/huggingface/transformers/pull/21710
| 1,592,422,838
|
PR_kwDOCUB6oc5KX-ly
| 21,710
|
Fix TVLT (torch device issue)
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
COLLABORATOR
| null |
# What does this PR do?
Just a few fixes for TVLT (torch device issue).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21710/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21710",
"html_url": "https://github.com/huggingface/transformers/pull/21710",
"diff_url": "https://github.com/huggingface/transformers/pull/21710.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21710.patch",
"merged_at": 1676975869000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21709
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21709/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21709/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21709/events
|
https://github.com/huggingface/transformers/pull/21709
| 1,592,394,220
|
PR_kwDOCUB6oc5KX4fe
| 21,709
|
Fix `get_class_in_module`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
COLLABORATOR
| null |
# What does this PR do?
The PR #21646 added a line `subprocess.run(["python", "-c", cmd])`. But in our daily CI (docker env.), the `python` binary doesn't exist; only the `python3` binary does, and this causes `FileNotFoundError: [Errno 2] No such file or directory: 'python'`.
This PR adds `try ... except ...` to avoid this failure, but it's really ugly. See my comment in this PR's changes.
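
For illustration, a common portable pattern (not necessarily what this PR does) is to spawn the interpreter that is currently running via `sys.executable` instead of hard-coding the binary name:

```python
import subprocess
import sys

# sys.executable points at the running interpreter, so this also works in
# images that only ship a `python3` binary.
subprocess.run([sys.executable, "-c", "print('ok')"], check=True)
```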
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21709/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21709",
"html_url": "https://github.com/huggingface/transformers/pull/21709",
"diff_url": "https://github.com/huggingface/transformers/pull/21709.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21709.patch",
"merged_at": 1676968755000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21708
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21708/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21708/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21708/events
|
https://github.com/huggingface/transformers/pull/21708
| 1,592,368,399
|
PR_kwDOCUB6oc5KXy7p
| 21,708
|
Auto api Value Error addition to Troubleshoot
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
Given the existence of "exotic" models that don't have a mapping to auto classes, this PR adds a small section at the end of the Troubleshooting guide about the `ValueError` raised when trying to load such a model with the Auto API.
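
A minimal sketch of the failure mode the new section documents (the repo id below is hypothetical, and the error wording is approximate):

```python
from transformers import AutoModel

# Loading a checkpoint whose config class has no auto-class mapping raises a
# ValueError along the lines of "Unrecognized configuration class ...".
model = AutoModel.from_pretrained("new-model/no-auto-mapping")  # hypothetical repo id
```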
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21708/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21708",
"html_url": "https://github.com/huggingface/transformers/pull/21708",
"diff_url": "https://github.com/huggingface/transformers/pull/21708.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21708.patch",
"merged_at": 1677171079000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21707
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21707/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21707/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21707/events
|
https://github.com/huggingface/transformers/pull/21707
| 1,592,252,206
|
PR_kwDOCUB6oc5KXaaA
| 21,707
|
[`Blip2`] Fix Blip-2 multi gpu
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the suggestions! Adapted the code accordingly\r\nHowever, there seems to be something off with the current `accelerate` integration w `blip2` - here since we are calling `generate` of a module of `Blip2ForConditionalGeneration`, I think that we are facing an edge case that I am not sure whether the fix should be here or on `accelerate`\r\n\r\nI think there are 2 challenging points\r\n\r\n1- if a users uses a device_map such as `auto` or `balanced` sometimes the `language_model` attribute will not have any `_hf_hook` attribute (from what I have got only the modules that are on the device_map and the parent module will have an `_hf_hook`), leading to the output of `language_model` not being on the correct device (as the output device will be determined by `language_model.lm_head._hf_hook`). \r\n\r\n2- If a well-educated user passes a custom device_map such as:\r\n```python\r\ndevice_map = {\r\n \"query_tokens\": 0,\r\n \"vision_model\":0,\r\n \"language_model\": 1,\r\n \"language_projection\": 0,\r\n \"qformer\": 0,\r\n}\r\n```\r\nthe `language_model` attribute will indeed have a `_hf_hook`, however the output of the module will not be set to the correct device as per my understanding, `_hf_hook.io_same_device` is set to `True` only on the parent class. I proposed a hacky solution at e0104f0 but I am not happy with it. \r\n\r\nSo I think that maybe a fix should be upstreamed on `accelerate` to enable some child modules behave as the parent module (i.e. have the same `_hf_hook` behaviour), or maybe on `generate` to allow output on a different device that the input, meaning that we add multiple `.to` in `sample`, `greedy`, etc.\r\nI am not sure what is the best solution here and I am sure that there is something simpler we can try!\r\nHere is a script to reproduce the initial bug:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import Blip2ForConditionalGeneration, Blip2Processor\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\nfrom PIL import Image\r\nimport requests\r\n\r\n\r\nurl = \"https://huggingface.co/hf-internal-testing/blip-test-image/resolve/main/demo.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\n\r\n\r\nmodel_t5 = AutoModelForSeq2SeqLM.from_pretrained(\"google/flan-t5-small\", device_map=\"balanced\", torch_dtype=torch.float16)\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/flan-t5-small\")\r\n\r\nprocessor = Blip2Processor.from_pretrained(\"Salesforce/blip2-flan-t5-xl\")\r\n\r\ndevice_map = {\r\n \"query_tokens\": 0,\r\n \"vision_model\":0,\r\n \"language_model\": 1,\r\n \"language_projection\": 0,\r\n \"qformer\": 0,\r\n}\r\n\r\n# can also try with custom device_map\r\nmodel = Blip2ForConditionalGeneration.from_pretrained(\"Salesforce/blip2-flan-t5-xl\", device_map=\"balanced\", torch_dtype=torch.float16)\r\n\r\ninputs = processor(images=image, return_tensors=\"pt\").to(0, torch.float16)\r\n\r\nprint(model_t5._hf_hook.execution_device)\r\nprint(model._hf_hook.execution_device)\r\n\r\nprint(model.language_model._hf_hook.execution_device) # this will fail\r\n\r\npredictions = model.generate(**inputs, do_sample=True)\r\ngenerated_text = processor.decode(predictions[0], skip_special_tokens=True)\r\nprint(generated_text)\r\n```",
"I can confirm the current implementation works is a users passes a `device_map` that has `language_model` , let me know if you see anything else that needs to be addressed!",
"Thank you very much @akkikiki for the very useful feedback! So just to summarize, we need:\r\n1- A more explicit warning pointing to the links you have shared so that users can understand how to use correct `device_map`\r\n2- The fix you proposed in https://github.com/huggingface/transformers/pull/21707#discussion_r1119275324 to make it work for some edge cases where the masks are spread across different devices in the case when n_gpus > 2 (I can only test on a enviornment where I have 2 GPUs for now)\r\nIs that correct? ",
"> Thank you very much @akkikiki for the very useful feedback! So just to summarize, we need: 1- A more explicit warning pointing to the links you have shared so that users can understand how to use correct `device_map` 2- The fix you proposed in [#21707 (comment)](https://github.com/huggingface/transformers/pull/21707#discussion_r1119275324) to make it work for some edge cases where the masks are spread across different devices in the case when n_gpus > 2 (I can only test on a enviornment where I have 2 GPUs for now) Is that correct?\r\n\r\nExactly! But the first one is just a suggestion so feel free to discard it :)",
"Thanks a mile @akkikiki , may I ask you to run the latest changes that I made on your side to confirm everything works as expected? Then we can merge I think! ",
"Re-installed your latest branch and works perfectly fine! Thanks a lot @younesbelkada!!",
"Hi, I'm still getting errors with this weirdly enough (am on 4.29.0 which I believe should have this fix included). \r\n\r\nI'm running the code from the model tutorial copy pasted:\r\n```python\r\nimport torch\r\nimport requests\r\nfrom PIL import Image\r\nfrom transformers import Blip2Processor, Blip2ForConditionalGeneration\r\n\r\nprocessor = Blip2Processor.from_pretrained(\"Salesforce/blip2-flan-t5-xxl\")\r\n\r\nmodel = Blip2ForConditionalGeneration.from_pretrained(\"Salesforce/blip2-flan-t5-xxl\", \r\n load_in_8bit=True, \r\n device_map=\"auto\",\r\n )\r\n\r\nimg_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' \r\nraw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')\r\n\r\nquestion = \"how many dogs are in the picture?\"\r\ninputs = processor(raw_image, question, return_tensors=\"pt\").to(\"cuda\", torch.float16)\r\n\r\nout = model.generate(**inputs)\r\nprint(processor.decode(out[0], skip_special_tokens=True))\r\n```\r\nwhich gives\r\n```\r\n/proj/vondrick4/sachit/miniconda3/envs/pyt13/lib/python3.10/site-packages/torch/utils/_contextli │\r\n│ b.py:115 in decorate_context │\r\n│ │\r\n│ 112 │ @functools.wraps(func) │\r\n│ 113 │ def decorate_context(*args, **kwargs): │\r\n│ 114 │ │ with ctx_factory(): │\r\n│ ❱ 115 │ │ │ return func(*args, **kwargs) │\r\n│ 116 │ │\r\n│ 117 │ return decorate_context │\r\n│ 118 │\r\n│ │\r\n│ /proj/vondrick4/sachit/miniconda3/envs/pyt13/lib/python3.10/site-packages/transformers/models/bl │\r\n│ ip_2/modeling_blip_2.py:1854 in generate │\r\n│ │\r\n│ 1851 │ │ inputs_embeds = self.get_input_embeddings()(input_ids) │\r\n│ 1852 │ │ inputs_embeds = torch.cat([language_model_inputs, inputs_embeds.to(language_mode │\r\n│ 1853 │ │ │\r\n│ ❱ 1854 │ │ outputs = self.language_model.generate( │\r\n│ 1855 │ │ │ inputs_embeds=inputs_embeds, │\r\n│ 1856 │ │ │ attention_mask=attention_mask, │\r\n│ 1857 │ │ │ **generate_kwargs, │\r\n│ │\r\n│ /proj/vondrick4/sachit/miniconda3/envs/pyt13/lib/python3.10/site-packages/torch/utils/_contextli │\r\n│ b.py:115 in decorate_context │\r\n│ │\r\n│ 112 │ @functools.wraps(func) │\r\n│ 113 │ def decorate_context(*args, **kwargs): │\r\n│ 114 │ │ with ctx_factory(): │\r\n│ ❱ 115 │ │ │ return func(*args, **kwargs) │\r\n│ 116 │ │\r\n│ 117 │ return decorate_context │\r\n│ 118 │\r\n│ │\r\n│ /proj/vondrick4/sachit/miniconda3/envs/pyt13/lib/python3.10/site-packages/transformers/generatio │\r\n│ n/utils.py:1515 in generate │\r\n│ │\r\n│ 1512 │ │ │ │ ) │\r\n│ 1513 │ │ │ │\r\n│ 1514 │ │ │ # 11. 
run greedy search │\r\n│ ❱ 1515 │ │ │ return self.greedy_search( │\r\n│ 1516 │ │ │ │ input_ids, │\r\n│ 1517 │ │ │ │ logits_processor=logits_processor, │\r\n│ 1518 │ │ │ │ stopping_criteria=stopping_criteria, │\r\n│ │\r\n│ /proj/vondrick4/sachit/miniconda3/envs/pyt13/lib/python3.10/site-packages/transformers/generatio │\r\n│ n/utils.py:2372 in greedy_search │\r\n│ │\r\n│ 2369 │ │ │ if eos_token_id is not None: │\r\n│ 2370 │ │ │ │ if pad_token_id is None: │\r\n│ 2371 │ │ │ │ │ raise ValueError(\"If `eos_token_id` is defined, make sure that `pad_ │\r\n│ ❱ 2372 │ │ │ │ next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1 - u │\r\n│ 2373 │ │ │ │\r\n│ 2374 │ │ │ # update generated ids, model inputs, and length for next step │\r\n│ 2375 │ │ │ input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1) │\r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:0!\r\n```\r\nAny ideas?",
"@sachit-menon \r\nThe issue is that `device_map=auto` (at least for the recent HF versions that I touched) places the language model head at the later in GPUs (in this case `cuda:7`), but the original inputs are placed in the first GPU i.e., `cuda:0`\r\n\r\nCan you run \r\n```\r\nfrom accelerate import init_empty_weights, infer_auto_device_map\r\nfrom transformers import Blip2Processor, Blip2ForConditionalGeneration\r\n\r\nwith init_empty_weights():\r\n model = Blip2ForConditionalGeneration(config)\r\n device_map = infer_auto_device_map(model, no_split_module_classes=[\"T5Block\"])\r\ndevice_map['language_model.lm_head'] = device_map[\"language_model.decoder.embed_tokens\"] # to make the genearted tokens and input_ids to be on the same device\r\n```\r\nand use the created `device_map` and set it as follows\r\n```\r\nmodel = Blip2ForConditionalGeneration.from_pretrained(\"Salesforce/blip2-flan-t5-xxl\", \r\n load_in_8bit=True, \r\n device_map=device_map,\r\n )\r\n```\r\ninstead of `device_map=\"auto\"` when initializing the model?\r\n",
"Thanks for the quick response. Now I get in the last line of the first block: `KeyError: 'language_model.decoder.embed_tokens'` ",
"@sachit-menon Ops, sorry. Try out \r\n```\r\nmax_memory={i: \"10GiB\" for i in range(8)}\r\nconfig = Blip2Config.from_pretrained(model_id)\r\nwith init_empty_weights():\r\n model = Blip2ForConditionalGeneration(config)\r\n device_map = infer_auto_device_map(model, no_split_module_classes=[\"T5Block\"], dtype=torch.float16, max_memory=max_memory)\r\ndevice_map['language_model.lm_head'] = device_map[\"language_model.encoder.embed_tokens\"]\r\n```\r\nor tweak `10GiB` to be adjusted to your GPU memory you have (the above worked with 16GB GPU).",
"is this supposed to work on t5-large?\r\n\r\n```\r\nconfig = AutoConfig.from_pretrained(\"t5-large\")\r\nwith init_empty_weights():\r\n model = AutoModelForSeq2SeqLM.from_config(config)\r\n \r\ndevice_map = infer_auto_device_map(model, no_split_module_classes=[\"T5Block\"], max_memory={i:1 for i in range(4)})\r\nprint(device_map)\r\ndevice_map['lm_head'] = device_map['encoder.embed_tokens']\r\n\r\nself.model = AutoModelForSeq2SeqLM.from_pretrained(self.hparams.model_name_or_path, load_in_8bit=True, device_map='auto', cache_dir=\"model_cache2\")\r\n```\r\n\r\nleads to this error in the training step:\r\n\r\n```\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py\", line 260, in forward\r\n return self.weight * hidden_states\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!\r\n```",
"@cassianlewis Yes, replace\r\n`self.model = AutoModelForSeq2SeqLM.from_pretrained(self.hparams.model_name_or_path, load_in_8bit=True, device_map='auto', cache_dir=\"model_cache2\")` \r\nwith \r\n`self.model = AutoModelForSeq2SeqLM.from_pretrained(self.hparams.model_name_or_path, load_in_8bit=True, device_map=device_map cache_dir=\"model_cache2\")`",
"Hey @akkikiki \r\nSorry, that was a typo in my original comment - I already tried using `device_map = device_map`\r\n\r\nI tried `device_map['lm_head'] = device_map['encoder.embed_tokens']` and separately`device_map['lm_head'] = device_map['decoder.embed_tokens']` as suggested in https://github.com/akkikiki/huggingface_examples/blob/main/examples/load_flan_ul2.py\r\n\r\nNone of this worked...\r\n\r\n",
  device_map['lm_head']">
"> device_map['lm_head'] = device_map['encoder.embed_tokens']\r\n\r\nOh, I got the same error. Could you tell me how to fix it?",
 @sachit-menon Ops,">
"> @sachit-menon Oops, sorry. Try out\r\n> \r\n> ```\r\n> max_memory={i: \"10GiB\" for i in range(8)}\r\n> config = Blip2Config.from_pretrained(model_id)\r\n> with init_empty_weights():\r\n>     model = Blip2ForConditionalGeneration(config)\r\n>     device_map = infer_auto_device_map(model, no_split_module_classes=[\"T5Block\"], dtype=torch.float16, max_memory=max_memory)\r\n> device_map['language_model.lm_head'] = device_map[\"language_model.encoder.embed_tokens\"]\r\n> ```\r\n> \r\n> or tweak `10GiB` to match the GPU memory you have (the above worked with a 16GB GPU).\r\n\r\n@akkikiki I tried this way, but it did not work. Can you tell me what the model_id is in\r\n\r\n    config = Blip2Config.from_pretrained(model_id)\r\n\r\nThe error is:\r\ndevice_map['language_model.lm_head'] = device_map[\"language_model.encoder.embed_tokens\"]\r\nKeyError: 'language_model.encoder.embed_tokens'\r\n",
"I noticed the `infer_auto_device_map()` returns an empty dict. The only solution that worked was to manually set the device map keys:\r\n\r\n```\r\nif torch.cuda.device_count() > 1:\r\n device_map = {\r\n \"query_tokens\": 0,\r\n \"vision_model\":0,\r\n \"language_model\": 1,\r\n \"language_projection\": 0,\r\n \"qformer\": 0,\r\n }\r\nelse:\r\n device_map = \"auto\"\r\n```"
] | 1,676
| 1,697
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR should fix all the issues related to BLIP-2 and multi-gpu
Before this PR, BLIP-2 had incorrect `set_input_embeddings` and `get_input_embeddings` functions, leading to unexpected behaviours when using it with `accelerate`, since `accelerate` gets confused when creating a device map with incorrect tied weights.
Do not merge before I figure out why this does not fix the behaviour with `blip2-flan-t5`. EDIT: should work properly now; there are some corner cases that users can encounter, but if they strictly follow the guidelines presented in https://github.com/huggingface/blog/blob/main/accelerate-large-models.md, it should be fine.
Fixes: https://github.com/TimDettmers/bitsandbytes/issues/153 & https://github.com/huggingface/transformers/pull/21441#issuecomment-1435370577
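For readers hitting the linked multi-GPU issues, here is a minimal sketch of the workaround discussed in those threads. The checkpoint name and memory limits are illustrative assumptions, not part of this PR, and the exact keys in the inferred `device_map` can vary with how `accelerate` splits the model:
```python
import torch
from accelerate import init_empty_weights, infer_auto_device_map
from transformers import Blip2Config, Blip2ForConditionalGeneration

model_id = "Salesforce/blip2-flan-t5-xxl"  # illustrative checkpoint
config = Blip2Config.from_pretrained(model_id)

# Build the device map on an empty (meta) model so no weights are materialized
with init_empty_weights():
    empty_model = Blip2ForConditionalGeneration(config)
device_map = infer_auto_device_map(
    empty_model,
    no_split_module_classes=["T5Block"],
    dtype=torch.float16,
    max_memory={i: "10GiB" for i in range(torch.cuda.device_count())},  # tune to your GPUs
)

# Keep the LM head on the same device as the decoder embeddings so generated
# tokens and input_ids end up on the same device
device_map["language_model.lm_head"] = device_map["language_model.decoder.embed_tokens"]

model = Blip2ForConditionalGeneration.from_pretrained(
    model_id, load_in_8bit=True, device_map=device_map
)
```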
cc @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21707/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21707",
"html_url": "https://github.com/huggingface/transformers/pull/21707",
"diff_url": "https://github.com/huggingface/transformers/pull/21707.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21707.patch",
"merged_at": 1677601739000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21706
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21706/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21706/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21706/events
|
https://github.com/huggingface/transformers/issues/21706
| 1,592,159,706
|
I_kwDOCUB6oc5e5m3a
| 21,706
|
Bloom's hidden states are None
|
{
"login": "minhngh",
"id": 48380528,
"node_id": "MDQ6VXNlcjQ4MzgwNTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/48380528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minhngh",
"html_url": "https://github.com/minhngh",
"followers_url": "https://api.github.com/users/minhngh/followers",
"following_url": "https://api.github.com/users/minhngh/following{/other_user}",
"gists_url": "https://api.github.com/users/minhngh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minhngh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minhngh/subscriptions",
"organizations_url": "https://api.github.com/users/minhngh/orgs",
"repos_url": "https://api.github.com/users/minhngh/repos",
"events_url": "https://api.github.com/users/minhngh/events{/privacy}",
"received_events_url": "https://api.github.com/users/minhngh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I found I had not checked the code carefully. This problem could be solved by passing the argument`output_hidden_states = True` to the `forward`"
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
Hi everyone,
I used BloomForCausalLM, which was pretrained and released by BigScience. I then forwarded input tensors to the model and tried to get the corresponding hidden states. However, an error was raised saying that the hidden states are None.
Has anyone encountered the same problem? Thank you very much.
https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/bloom/modeling_bloom.py#L935
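For reference, a minimal sketch of the resolution mentioned in the comment above; the small `bigscience/bloom-560m` checkpoint is only an illustrative choice:
```python
import torch
from transformers import AutoTokenizer, BloomForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = BloomForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("Hello, world", return_tensors="pt")
with torch.no_grad():
    # hidden_states stays None unless output_hidden_states=True is passed
    outputs = model(**inputs, output_hidden_states=True)

print(len(outputs.hidden_states))  # embeddings + one tensor per layer
```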
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21706/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21705
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21705/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21705/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21705/events
|
https://github.com/huggingface/transformers/issues/21705
| 1,592,081,683
|
I_kwDOCUB6oc5e5T0T
| 21,705
|
a bug in transformers.GenerationMixin
|
{
"login": "lixinliu1995",
"id": 51687594,
"node_id": "MDQ6VXNlcjUxNjg3NTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/51687594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lixinliu1995",
"html_url": "https://github.com/lixinliu1995",
"followers_url": "https://api.github.com/users/lixinliu1995/followers",
"following_url": "https://api.github.com/users/lixinliu1995/following{/other_user}",
"gists_url": "https://api.github.com/users/lixinliu1995/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lixinliu1995/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lixinliu1995/subscriptions",
"organizations_url": "https://api.github.com/users/lixinliu1995/orgs",
"repos_url": "https://api.github.com/users/lixinliu1995/repos",
"events_url": "https://api.github.com/users/lixinliu1995/events{/privacy}",
"received_events_url": "https://api.github.com/users/lixinliu1995/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @lixinliu1995 👋 I'm not sure I follow, `unfinished_sequences` holds a boolean for each sentence and is updated each iteration on L2247 (if a sentence contains any of the eos tokens -> set to false)\r\n\r\nHow would you expect it to behave?",
"yeah, you are right. It is my mistake.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,679
| 1,679
|
NONE
| null |
### System Info
transformers 4.27.0. In the greedy_search method of the GenerationMixin class, the unfinished_sequences variable at line 2177 should be placed inside the body of the 'while' loop so that it grows as decoding proceeds. Otherwise it is always a value rather than a sequence.
@gante
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
while True:
    unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)
    if synced_gpus:
        pass
    # ba la ba la ...
```
### Expected behavior
Fix as soon as possible
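For context, a minimal sketch of the actual behavior, as clarified in the comments: `unfinished_sequences` holds one flag per sequence (not per token), so it is initialized once before the loop and only updated inside it. Token ids here are illustrative:
```python
import torch

eos_token_id = 2  # illustrative id
input_ids = torch.tensor([[5, 7], [5, 9]])

# one flag per sequence in the batch, initialized once before the loop
unfinished_sequences = input_ids.new_ones(input_ids.shape[0])

for step in range(3):
    # pretend the first sequence emits EOS at step 1
    next_tokens = torch.tensor([eos_token_id, 8]) if step == 1 else torch.tensor([4, 8])
    input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
    # the flag drops to 0 once a sequence produces EOS and stays 0 afterwards
    unfinished_sequences = unfinished_sequences.mul((next_tokens != eos_token_id).long())
    if unfinished_sequences.max() == 0:
        break

print(unfinished_sequences)  # tensor([0, 1]): only the first sequence finished
```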
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21705/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21704
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21704/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21704/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21704/events
|
https://github.com/huggingface/transformers/pull/21704
| 1,592,044,875
|
PR_kwDOCUB6oc5KWuGK
| 21,704
|
Adding task guides to resources
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,677
| 1,676
|
CONTRIBUTOR
| null |
Previously, we added links to compatible models to the task guides. This PR enables navigation in the opposite direction and adds links to relevant task guides (based on model mappings) to the list of resources in the model docs. Those who land on the model docs should now be able to find relevant task guides more quickly. The links are added to the list of resources (along with the previously listed notebooks, blog posts, scripts, etc.).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21704/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21704/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21704",
"html_url": "https://github.com/huggingface/transformers/pull/21704",
"diff_url": "https://github.com/huggingface/transformers/pull/21704.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21704.patch",
"merged_at": 1676993712000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21703
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21703/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21703/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21703/events
|
https://github.com/huggingface/transformers/pull/21703
| 1,591,940,567
|
PR_kwDOCUB6oc5KWX4R
| 21,703
|
Fix typo in `PROCESSOR_MAPPING_NAMES` and add tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Failing test is irrelevant"
] | 1,676
| 1,676
| 1,676
|
COLLABORATOR
| null |
# What does this PR do?
Fix a typo in `PROCESSOR_MAPPING_NAMES` and add a new repo check: verify that all names in the auto name mappings are defined in the library.
The effect of this PR (if `GITProcessor` is not fixed):
```bash
Checking all names in auto name mappings are defined.
Traceback (most recent call last):
File "C:\Users\33611\Desktop\Project\transformers-hf-gcp\utils\check_repo.py", line 922, in <module>
check_repo_quality()
File "C:\Users\33611\Desktop\Project\transformers-hf-gcp\utils\check_repo.py", line 918, in check_repo_quality
check_all_auto_object_names_being_defined()
File "C:\Users\33611\Desktop\Project\transformers-hf-gcp\utils\check_repo.py", line 640, in check_all_auto_object_names_being_defined
raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures))
Exception: There were 1 failures:
`GITProcessor` appears in the mapping `PROCESSOR_MAPPING_NAMES` but it is not defined in the library.
```
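A rough, illustrative sketch of the idea behind the new check (not the actual `utils/check_repo.py` implementation):
```python
import transformers
from transformers.models.auto.processing_auto import PROCESSOR_MAPPING_NAMES

failures = []
for model_type, class_name in PROCESSOR_MAPPING_NAMES.items():
    # every name listed in an auto name mapping must exist in the library
    if not hasattr(transformers, class_name):
        failures.append(
            f"`{class_name}` appears in the mapping `PROCESSOR_MAPPING_NAMES` "
            "but it is not defined in the library."
        )
if failures:
    raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures))
```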
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21703/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21703",
"html_url": "https://github.com/huggingface/transformers/pull/21703",
"diff_url": "https://github.com/huggingface/transformers/pull/21703.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21703.patch",
"merged_at": 1676968706000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21702
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21702/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21702/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21702/events
|
https://github.com/huggingface/transformers/pull/21702
| 1,591,919,691
|
PR_kwDOCUB6oc5KWTYm
| 21,702
|
[SpeechT5HifiGan] Handle batched inputs
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"(failing test is unrelated)"
] | 1,676
| 1,687
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Modifies the SpeechT5 HiFiGAN model to accept batched inputs. This PR is **not** a breaking change:
* If the spectrogram inputs are un-batched, the waveform outputs are un-batched (as before)
* If the spectrogram inputs are batched, the waveform outputs are batched (new)
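A minimal usage sketch of the resulting behavior; the `microsoft/speecht5_hifigan` checkpoint is assumed here for illustration:
```python
import torch
from transformers import SpeechT5HifiGan

vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
num_mels = vocoder.config.model_in_dim  # 80 for the released checkpoint

single = torch.randn(100, num_mels)      # (sequence_length, num_mels)
batched = torch.randn(4, 100, num_mels)  # (batch_size, sequence_length, num_mels)

print(vocoder(single).shape)   # un-batched input -> un-batched waveform (as before)
print(vocoder(batched).shape)  # batched input -> batched waveforms (new)
```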
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21702/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21702",
"html_url": "https://github.com/huggingface/transformers/pull/21702",
"diff_url": "https://github.com/huggingface/transformers/pull/21702.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21702.patch",
"merged_at": 1677061017000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21701
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21701/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21701/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21701/events
|
https://github.com/huggingface/transformers/pull/21701
| 1,591,859,127
|
PR_kwDOCUB6oc5KWG7o
| 21,701
|
remove position ids and token type ids from forward args in docstring
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Applies to all the heads, that's why I removed it from the `GPT_NEOX_INPUTS_DOCSTRING` ! Or do you mean other models? "
] | 1,676
| 1,676
| 1,676
|
COLLABORATOR
| null |
# What does this PR do?
Fixes #21567, which indicated that the docstring for the GPTNeoX model does not match the forward's arguments.
Removed `position_ids` and `token_type_ids`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21701/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21701",
"html_url": "https://github.com/huggingface/transformers/pull/21701",
"diff_url": "https://github.com/huggingface/transformers/pull/21701.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21701.patch",
"merged_at": 1676959297000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21700
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21700/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21700/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21700/events
|
https://github.com/huggingface/transformers/pull/21700
| 1,591,487,048
|
PR_kwDOCUB6oc5KU2iw
| 21,700
|
Respect documentation on passive log level
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"while at it could you please remind me what does `passive` actually imply?\r\n\r\n> a 'passive' level which doesn't set anything and lets the application set the level\r\n\r\nhow does the application set the level? which application? perhaps we need a small example?",
"Hmm, this broke many deepspeed tests that were relying on info log level in deepspeed.\r\n\r\nnow deepspeed is no longer logging info `\"DeepSpeed info: version={}, git-hash={}, git-branch={}\"`\r\n\r\nand thus the tests fail as it is looking for this string to tell DS is running.\r\n\r\nNow, why would this change impact an underlying component I wonder.",
"OK, I now have to explicitly pass ` log_level=\"info\" to trainer args to have the previous functionality. This looks like a BC breakage, no?\r\n\r\nI adapted `get_regression_trainer` to have the original behavior here: https://github.com/huggingface/transformers/pull/21769 so it's all back to working.",
"Yes, I did mention it changed the behavior in the description of the PR and asked for how to proceed. You and Lysandre both agreed the break was worth it in this case.",
"Totally, Sylvain. I guess I struggle to understand when a BC breakage is ok and when it's not."
] | 1,676
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
The documentation states that setting a `log_level` to `"passive"` in the training arguments won't touch the log level, but this is not the case. Currently, setting `log_level` to `"passive"` is the same as setting it to `"info"`.
Likewise, setting `log_level_replica` to `"passive"` is the same as setting it to `"warning"`.
This PR fixes this and changes the default of `log_level_replica` to `"warning"` so that its default is consistent. The question is whether we should change the default of `log_level` to `"info"` to keep the previous behavior, or leave it as is, which would set it to warning unless the user has set their own Transformers verbosity to info, as in the examples.
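A small sketch of the documented semantics after this fix (illustrative values):
```python
from transformers import TrainingArguments, logging

logging.set_verbosity_info()  # the application picks its own verbosity

# with "passive", the Trainer leaves the verbosity above untouched
args = TrainingArguments(output_dir="out", log_level="passive")

# explicit levels still override whatever the application set
args = TrainingArguments(output_dir="out", log_level="info", log_level_replica="warning")
```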
Related to #20154
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21700/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21700",
"html_url": "https://github.com/huggingface/transformers/pull/21700",
"diff_url": "https://github.com/huggingface/transformers/pull/21700.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21700.patch",
"merged_at": 1677055159000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21699
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21699/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21699/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21699/events
|
https://github.com/huggingface/transformers/pull/21699
| 1,591,458,651
|
PR_kwDOCUB6oc5KUwmQ
| 21,699
|
Graphormer fix
|
{
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,677
| 1,677
|
MEMBER
| null |
Removes a failing call to `requires_backend` in the Graphormer model, since the model uses `is_cython_available` instead.
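For context, a hedged sketch of the soft-dependency pattern the model relies on instead of a hard backend requirement; the flag name is real, the branches are illustrative:
```python
from transformers.utils import is_cython_available

if is_cython_available():
    # fast path: the Cython-accelerated graph collation can be used
    use_fast_collation = True
else:
    # fall back to a pure-Python path instead of failing at import time
    use_fast_collation = False
```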
@ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21699/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21699",
"html_url": "https://github.com/huggingface/transformers/pull/21699",
"diff_url": "https://github.com/huggingface/transformers/pull/21699.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21699.patch",
"merged_at": 1677223253000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21698
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21698/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21698/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21698/events
|
https://github.com/huggingface/transformers/pull/21698
| 1,591,455,454
|
PR_kwDOCUB6oc5KUv5x
| 21,698
|
Pass along revision in dynamic code fetch
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
COLLABORATOR
| null |
# What does this PR do?
The `revision` argument wasn't passed along to `cached_file` when fetching a dynamic config/modeling file; this PR fixes that.
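A short sketch of what now works as expected; the repo name and tag are illustrative assumptions:
```python
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "some-org/model-with-custom-code",  # illustrative repo exposing remote code
    trust_remote_code=True,
    revision="v1.0",  # now also used when fetching the remote config/modeling files
)
```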
Partially fixes #21662
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21698/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21698",
"html_url": "https://github.com/huggingface/transformers/pull/21698",
"diff_url": "https://github.com/huggingface/transformers/pull/21698.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21698.patch",
"merged_at": 1676924502000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21697
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21697/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21697/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21697/events
|
https://github.com/huggingface/transformers/pull/21697
| 1,591,455,077
|
PR_kwDOCUB6oc5KUv0c
| 21,697
|
Fix-rag-finetune-project-requirement
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
COLLABORATOR
| null |
# What does this PR do?
Should fix #21692: the requirement for pytorch-lightning needs to be pinned to `<=1.6.0`.
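For example, pinning with `pip install "pytorch-lightning>=1.5.10,<=1.6.0"` restores a working setup (illustrative bounds; 1.6.0 is the version reported working in the issue).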
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21697/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21697",
"html_url": "https://github.com/huggingface/transformers/pull/21697",
"diff_url": "https://github.com/huggingface/transformers/pull/21697.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21697.patch",
"merged_at": 1676910219000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21696
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21696/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21696/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21696/events
|
https://github.com/huggingface/transformers/pull/21696
| 1,590,920,219
|
PR_kwDOCUB6oc5KS8py
| 21,696
|
added file
|
{
"login": "lenni991",
"id": 82563121,
"node_id": "MDQ6VXNlcjgyNTYzMTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/82563121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lenni991",
"html_url": "https://github.com/lenni991",
"followers_url": "https://api.github.com/users/lenni991/followers",
"following_url": "https://api.github.com/users/lenni991/following{/other_user}",
"gists_url": "https://api.github.com/users/lenni991/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lenni991/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lenni991/subscriptions",
"organizations_url": "https://api.github.com/users/lenni991/orgs",
"repos_url": "https://api.github.com/users/lenni991/repos",
"events_url": "https://api.github.com/users/lenni991/events{/privacy}",
"received_events_url": "https://api.github.com/users/lenni991/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"commit pr",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21696/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21696",
"html_url": "https://github.com/huggingface/transformers/pull/21696",
"diff_url": "https://github.com/huggingface/transformers/pull/21696.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21696.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21695
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21695/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21695/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21695/events
|
https://github.com/huggingface/transformers/pull/21695
| 1,590,834,531
|
PR_kwDOCUB6oc5KSsHv
| 21,695
|
fix LayoutLMv3TokenizerFast subword label after 'Ġ' token
|
{
"login": "thibaultdouzon",
"id": 23520944,
"node_id": "MDQ6VXNlcjIzNTIwOTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/23520944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thibaultdouzon",
"html_url": "https://github.com/thibaultdouzon",
"followers_url": "https://api.github.com/users/thibaultdouzon/followers",
"following_url": "https://api.github.com/users/thibaultdouzon/following{/other_user}",
"gists_url": "https://api.github.com/users/thibaultdouzon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thibaultdouzon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thibaultdouzon/subscriptions",
"organizations_url": "https://api.github.com/users/thibaultdouzon/orgs",
"repos_url": "https://api.github.com/users/thibaultdouzon/repos",
"events_url": "https://api.github.com/users/thibaultdouzon/events{/privacy}",
"received_events_url": "https://api.github.com/users/thibaultdouzon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21695). All of your documentation changes will be reflected on that endpoint.",
"Also cc @amyeroberts ",
"Hi @ArthurZucker, thanks for your investigations.\r\n\r\nThis PR fixes the problem for LayoutLMv3 but I expect the problem to exist on other models using Fast BPE tokenization, I will take a look when I can to list all impacted models that need a fix.",
"Thanks a lot for this fix, would you be able to take into account my comment such that we can merge it? 🙏 \r\n\r\nThanks!\r\n\r\nBtw the same fix could then be applied to LayoutLMv2 and LayoutXLM",
"LayoutLMv2 uses WordPiece and not BPE. From what I saw its vocabulary does not contain empty token and thus cannot produce (0, 0) offset_mapping when encoding.\r\n"
] | 1,676
| 1,680
| 1,680
|
CONTRIBUTOR
| null |
LayoutLMv3TokenizerFast can produce an empty 'Ġ' token with `offset_mapping = (0, 0)`.
The next token is then wrongly assumed to also be the beginning of a word and isn't correctly assigned `pad_token_label`.
This may lead to misalignment between words and token representations.
Other BPE tokenizers might be affected.
Adds a check on the previous token: if it had an empty `offset_mapping` (special tokens excluded), the current token is not treated as a word start.
Removes the copy check from LayoutLMv2TokenizerFast for `_batch_encode_plus` because it is not affected (it uses WordPiece instead of BPE).
Modifies the test with text that produces a 'Ġ' token.
Fixes issue: #19978
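An illustrative restatement of the rule (not the actual tokenizer code); `pad_token_label = -100` is an assumption matching the common ignore-index default:
```python
pad_token_label = -100  # assumption: the usual "ignore" label

def label_for_token(word_label, offset, prev_offset):
    """offset / prev_offset are (start, end) pairs from return_offsets_mapping."""
    # a token starts a word only if its offset starts at 0 AND the previous
    # non-special token was not an empty (0, 0) artifact such as a lone 'Ġ'
    starts_word = offset[0] == 0 and prev_offset != (0, 0)
    return word_label if starts_word else pad_token_label
```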
@NielsRogge
@ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21695/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21695",
"html_url": "https://github.com/huggingface/transformers/pull/21695",
"diff_url": "https://github.com/huggingface/transformers/pull/21695.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21695.patch",
"merged_at": 1680532356000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21694
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21694/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21694/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21694/events
|
https://github.com/huggingface/transformers/pull/21694
| 1,590,829,544
|
PR_kwDOCUB6oc5KSrGw
| 21,694
|
Apply ruff flake8-comprehensions
|
{
"login": "Skylion007",
"id": 2053727,
"node_id": "MDQ6VXNlcjIwNTM3Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2053727?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Skylion007",
"html_url": "https://github.com/Skylion007",
"followers_url": "https://api.github.com/users/Skylion007/followers",
"following_url": "https://api.github.com/users/Skylion007/following{/other_user}",
"gists_url": "https://api.github.com/users/Skylion007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Skylion007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Skylion007/subscriptions",
"organizations_url": "https://api.github.com/users/Skylion007/orgs",
"repos_url": "https://api.github.com/users/Skylion007/repos",
"events_url": "https://api.github.com/users/Skylion007/events{/privacy}",
"received_events_url": "https://api.github.com/users/Skylion007/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Seems like the two failing tests are flake? One of the failing tests seems unable to import torch for some reason.",
"Yes the two test failures are irrelevant (this is a test we normally launch on its own in SageMaker, not with the test suites).\r\nThanks a lot for applying this, the result is a lot nicer!"
] | 1,676
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #21693
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Pinging @sgugger since this just enables additional checks in ruff and improves code quality. All the flake8-comprehensions checks are only included in the plugin if they demonstrably increase readability and perf.
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21694/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21694",
"html_url": "https://github.com/huggingface/transformers/pull/21694",
"diff_url": "https://github.com/huggingface/transformers/pull/21694.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21694.patch",
"merged_at": 1677053694000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21693
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21693/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21693/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21693/events
|
https://github.com/huggingface/transformers/issues/21693
| 1,590,828,698
|
I_kwDOCUB6oc5e0h6a
| 21,693
|
Enable flake8-comprehension ruff checks
|
{
"login": "Skylion007",
"id": 2053727,
"node_id": "MDQ6VXNlcjIwNTM3Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2053727?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Skylion007",
"html_url": "https://github.com/Skylion007",
"followers_url": "https://api.github.com/users/Skylion007/followers",
"following_url": "https://api.github.com/users/Skylion007/following{/other_user}",
"gists_url": "https://api.github.com/users/Skylion007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Skylion007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Skylion007/subscriptions",
"organizations_url": "https://api.github.com/users/Skylion007/orgs",
"repos_url": "https://api.github.com/users/Skylion007/repos",
"events_url": "https://api.github.com/users/Skylion007/events{/privacy}",
"received_events_url": "https://api.github.com/users/Skylion007/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,676
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### Feature request
* Ruff was recently added as a linter to huggingface transformers. It provides out-of-the-box support for flake8-comprehensions checks, which improve list/set/dict comprehensions in Python and make them more readable and faster. Additionally, ruff has autofixes for all these rules, so applying them automatically to the codebase is really straightforward and should improve the style of the codebase and make it more readable.
### Motivation
Better Faster / Improved Code
### Your contribution
Applying the flake8-comprehensions linter to the library and enabling it.
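For illustration, the kind of rewrite these checks auto-fix (rule codes C400-C417 in ruff):
```python
names = ["ada", "grace"]

# flagged: an unnecessary list comprehension inside a set() call
unique = set([n.upper() for n in names])

# auto-fixed: a direct set comprehension, clearer and without the temporary list
unique = {n.upper() for n in names}
```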
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21693/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21692
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21692/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21692/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21692/events
|
https://github.com/huggingface/transformers/issues/21692
| 1,590,821,568
|
I_kwDOCUB6oc5e0gLA
| 21,692
|
RAG: Which version of pytorch-lightning to use for finetune-rag.sh?
|
{
"login": "Mrs-Hudson",
"id": 7013661,
"node_id": "MDQ6VXNlcjcwMTM2NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7013661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mrs-Hudson",
"html_url": "https://github.com/Mrs-Hudson",
"followers_url": "https://api.github.com/users/Mrs-Hudson/followers",
"following_url": "https://api.github.com/users/Mrs-Hudson/following{/other_user}",
"gists_url": "https://api.github.com/users/Mrs-Hudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mrs-Hudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mrs-Hudson/subscriptions",
"organizations_url": "https://api.github.com/users/Mrs-Hudson/orgs",
"repos_url": "https://api.github.com/users/Mrs-Hudson/repos",
"events_url": "https://api.github.com/users/Mrs-Hudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mrs-Hudson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Looks like 1.6.0 worked",
"The requirement file states `pytorch-lightning >= 1.5.10`. Were you using an older version or a newer version? \r\n(shoud I pin version 1.6.0 as the max version?)",
"I can confirm that 1.9.1 didn't work, neither did 1.6.4. haven't tried\n1.6.1-1.6.3\n\nOn Mon, Feb 20, 2023, 01:17 Arthur ***@***.***> wrote:\n\n> The requirement file states pytorch-lightning >= 1.5.10. Were you using\n> an older version or a newer version?\n> (shoud I pin version 1.6.0 as the max version?)\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/21692#issuecomment-1436603658>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABVQKHJ2YDESVJCITYJPPXDWYMZCTANCNFSM6AAAAAAVBEDYJU>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n"
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: Linux-3.10.0-1160.83.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
The latest version of pytorch-lightning is not compatible with the RAG module of transformers. I was able to successfully create my own knowledge dataset, but the finetune_rag.py step fails. I also looked at the version history and experimented with 1.6.4, which also throws an error.
I was wondering if there's a known version of pytorch_lightning that works with RAG.
Steps to Reproduce:
1. From transformers root, run a shell script with
`python examples/research_projects/rag/finetune_rag.py --data_dir /home/rparik/linkedInEmpForKnowledgeClean/data.csv \
--output_dir ./finetune_rag_output/ \
--model_name_or_path facebook/rag-token-nq \
--model_type rag_sequence \
--fp16 \
--gpus 1 \
--index_name custom \
--passages_path /home/rparik/projects/rag/transformers/linkKnowledgeBase/my_knowledge_dataset \
--index_path /home/rparik/projects/rag/transformers/linkKnowledgeBase/my_knowledge_dataset_hnsw_index.faiss \
`
Error:
```
File "examples/research_projects/rag/finetune_rag.py", line 17, in <module>
from pytorch_lightning.plugins.training_type import DDPStrategy
ModuleNotFoundError: No module named 'pytorch_lightning.plugins.training_type'
```
### Expected behavior
Script runs without errors
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21692/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21691
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21691/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21691/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21691/events
|
https://github.com/huggingface/transformers/pull/21691
| 1,590,801,621
|
PR_kwDOCUB6oc5KSlzS
| 21,691
|
Arijitx/wav2vec2 alignment
|
{
"login": "Marco071086",
"id": 88912522,
"node_id": "MDQ6VXNlcjg4OTEyNTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/88912522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Marco071086",
"html_url": "https://github.com/Marco071086",
"followers_url": "https://api.github.com/users/Marco071086/followers",
"following_url": "https://api.github.com/users/Marco071086/following{/other_user}",
"gists_url": "https://api.github.com/users/Marco071086/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Marco071086/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Marco071086/subscriptions",
"organizations_url": "https://api.github.com/users/Marco071086/orgs",
"repos_url": "https://api.github.com/users/Marco071086/repos",
"events_url": "https://api.github.com/users/Marco071086/events{/privacy}",
"received_events_url": "https://api.github.com/users/Marco071086/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sanchit-gandhi ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,680
| 1,680
|
NONE
| null |
<img width="239" alt="result_igf" src="https://user-images.githubusercontent.com/88912522/219967634-5a2a0d23-c878-47e4-9278-058b1d6d44f1.png">
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21691/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21691",
"html_url": "https://github.com/huggingface/transformers/pull/21691",
"diff_url": "https://github.com/huggingface/transformers/pull/21691.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21691.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21690
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21690/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21690/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21690/events
|
https://github.com/huggingface/transformers/issues/21690
| 1,590,694,672
|
I_kwDOCUB6oc5e0BMQ
| 21,690
|
save_vocabulary() got an unexpected keyword argument 'filename_prefix'
|
{
"login": "szarki9",
"id": 55212433,
"node_id": "MDQ6VXNlcjU1MjEyNDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/55212433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/szarki9",
"html_url": "https://github.com/szarki9",
"followers_url": "https://api.github.com/users/szarki9/followers",
"following_url": "https://api.github.com/users/szarki9/following{/other_user}",
"gists_url": "https://api.github.com/users/szarki9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/szarki9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/szarki9/subscriptions",
"organizations_url": "https://api.github.com/users/szarki9/orgs",
"repos_url": "https://api.github.com/users/szarki9/repos",
"events_url": "https://api.github.com/users/szarki9/events{/privacy}",
"received_events_url": "https://api.github.com/users/szarki9/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The tokenizer you are passing to the `Trainer` does not look like it comes from the Transformers library and the reason you are getting the error is that its `save_pretrained` method doesn't look like it works. Just remove the line `tokenizer=tokenizer` in the creation of the `Seq2SeqTrainer` and you should be able to train.",
"Thanks so much for a prompt response, indeed that was the issue :) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,680
| 1,680
|
NONE
| null |
### System Info
Hi,
I'm trying to fine-tune a T5ForConditionalGeneration model using `trainer.train()`, and I'm getting the following error:
```
Saving model checkpoint to models/simple-finetuned-T5CHEM-to-SSRT/checkpoint-500
Configuration saved in models/simple-finetuned-T5CHEM-to-SSRT/checkpoint-500/config.json
Model weights saved in models/simple-finetuned-T5CHEM-to-SSRT/checkpoint-500/pytorch_model.bin
tokenizer config file saved in models/simple-finetuned-T5CHEM-to-SSRT/checkpoint-500/tokenizer_config.json
Special tokens file saved in models/simple-finetuned-T5CHEM-to-SSRT/checkpoint-500/special_tokens_map.json
added tokens file saved in models/simple-finetuned-T5CHEM-to-SSRT/checkpoint-500/added_tokens.json
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [27], in <module>
----> 1 trainer.train()
File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:1521, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1516 self.model_wrapped = self.model
1518 inner_training_loop = find_executable_batch_size(
1519 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1520 )
-> 1521 return inner_training_loop(
1522 args=args,
1523 resume_from_checkpoint=resume_from_checkpoint,
1524 trial=trial,
1525 ignore_keys_for_eval=ignore_keys_for_eval,
1526 )
File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:1840, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1837 self.state.epoch = epoch + (step + 1) / steps_in_epoch
1838 self.control = self.callback_handler.on_step_end(args, self.state, self.control)
-> 1840 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
1841 else:
1842 self.control = self.callback_handler.on_substep_end(args, self.state, self.control)
File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:2069, in Trainer._maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval)
2066 self._report_to_hp_search(trial, self.state.global_step, metrics)
2068 if self.control.should_save:
-> 2069 self._save_checkpoint(model, trial, metrics=metrics)
2070 self.control = self.callback_handler.on_save(self.args, self.state, self.control)
File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:2141, in Trainer._save_checkpoint(self, model, trial, metrics)
2138 self.store_flos()
2140 output_dir = os.path.join(run_dir, checkpoint_folder)
-> 2141 self.save_model(output_dir, _internal_call=True)
2142 if self.deepspeed:
2143 # under zero3 model file itself doesn't get saved since it's bogus! Unless deepspeed
2144 # config `stage3_gather_16bit_weights_on_model_save` is True
2145 self.deepspeed.save_checkpoint(output_dir)
File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:2631, in Trainer.save_model(self, output_dir, _internal_call)
2628 self.deepspeed.save_checkpoint(output_dir)
2630 elif self.args.should_save:
-> 2631 self._save(output_dir)
2633 # Push to the Hub when `save_model` is called by the user.
2634 if self.args.push_to_hub and not _internal_call:
File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:2685, in Trainer._save(self, output_dir, state_dict)
2683 self.model.save_pretrained(output_dir, state_dict=state_dict)
2684 if self.tokenizer is not None:
-> 2685 self.tokenizer.save_pretrained(output_dir)
2687 # Good practice: save your training arguments together with the trained model
2688 torch.save(self.args, os.path.join(output_dir, TRAINING_ARGS_NAME))
File /opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py:2132, in PreTrainedTokenizerBase.save_pretrained(self, save_directory, legacy_format, filename_prefix, push_to_hub, **kwargs)
2128 logger.info(f"Special tokens file saved in {special_tokens_map_file}")
2130 file_names = (tokenizer_config_file, special_tokens_map_file)
-> 2132 save_files = self._save_pretrained(
2133 save_directory=save_directory,
2134 file_names=file_names,
2135 legacy_format=legacy_format,
2136 filename_prefix=filename_prefix,
2137 )
2139 if push_to_hub:
2140 self._upload_modified_files(
2141 save_directory, repo_id, files_timestamps, commit_message=commit_message, token=token
2142 )
File /opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py:2176, in PreTrainedTokenizerBase._save_pretrained(self, save_directory, file_names, legacy_format, filename_prefix)
2173 f.write(out_str)
2174 logger.info(f"added tokens file saved in {added_tokens_file}")
-> 2176 vocab_files = self.save_vocabulary(save_directory, filename_prefix=filename_prefix)
2178 return file_names + vocab_files + (added_tokens_file,)
TypeError: save_vocabulary() got an unexpected keyword argument 'filename_prefix'
```
So I'm not sure what's happening exactly, especially since this argument is optional.
I'm using tokenizers==0.12.1 & transformers==4.22.0.
I would appreciate any help!
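For context, the traceback shows that `PreTrainedTokenizerBase._save_pretrained` always forwards `filename_prefix` to `save_vocabulary`, so any tokenizer subclass must accept that keyword. One hedged workaround sketch, assuming t5chem's `SimpleTokenizer.save_vocabulary` takes only the directory argument:
```python
# Hedged sketch: adapt the t5chem tokenizer so it tolerates the
# filename_prefix keyword that transformers' save_pretrained passes along.
from functools import wraps

_orig_save_vocabulary = tokenizer.save_vocabulary

@wraps(_orig_save_vocabulary)
def _save_vocabulary(save_directory, filename_prefix=None):
    # filename_prefix is accepted and ignored; t5chem's version has no such parameter.
    return _orig_save_vocabulary(save_directory)

tokenizer.save_vocabulary = _save_vocabulary
```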
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Im using the model published here: https://github.com/HelloJocelynLu/t5chem
```python
import datasets
import numpy as np
import pandas as pd
from typing import Dict

from transformers import (
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    T5ForConditionalGeneration,
)
from transformers.trainer_utils import PredictionOutput

# SimpleTokenizer ships with the t5chem package (https://github.com/HelloJocelynLu/t5chem);
# the exact import path may differ between releases.
from t5chem import SimpleTokenizer

batch_size = 16
model_checkpoint = "simple"  # placeholder: the original snippet left this undefined
model_name = model_checkpoint.split("/")[-1]
model_path = "/home/jovyan/workbench-shared-folder/retro-syn/models/pretrain/simple/"
model = T5ForConditionalGeneration.from_pretrained(model_path)
tokenizer = SimpleTokenizer(vocab_file=model_path + "vocab.pt")

args = Seq2SeqTrainingArguments(
    f"models/{model_name}-finetuned-T5CHEM-to-SSRT",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=1,
    predict_with_generate=True,
    fp16=True,
    push_to_hub=False,
)

train_path_source = "data/USPTO_50k/train.source"
train_path_target = "data/USPTO_50k/train.target"
valid_path_source = "data/USPTO_50k/val.source"
valid_path_target = "data/USPTO_50k/val.target"

# Each .source/.target file holds one sequence per line.
train_data = pd.read_csv(train_path_source, header=None).rename(columns={0: "product"})
train_data_reactant = pd.read_csv(train_path_target, header=None).rename(columns={0: "reactant"})
train_dataset = pd.concat([train_data, train_data_reactant], axis=1)

# The original snippet used valid_dataset below without building it; mirrored here.
valid_data = pd.read_csv(valid_path_source, header=None).rename(columns={0: "product"})
valid_data_reactant = pd.read_csv(valid_path_target, header=None).rename(columns={0: "reactant"})
valid_dataset = pd.concat([valid_data, valid_data_reactant], axis=1)

data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)

max_input_length = 128
max_target_length = 128
task_type2 = "Reactants:"

def preprocess_function(examples):
    inputs = [task_type2 + ex for ex in examples["product"]]
    targets = list(examples["reactant"])
    model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True, padding=True, return_tensors="pt")
    # Tokenize the targets and attach them as labels.
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(targets, max_length=max_target_length, truncation=True, padding=True, return_tensors="pt")
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# Dataset.from_pandas is the documented constructor for DataFrames
# (the original used Dataset.from_dict, which expects a mapping).
raw_datasets = datasets.DatasetDict(
    {
        "train": datasets.Dataset.from_pandas(train_dataset, preserve_index=False),
        "val": datasets.Dataset.from_pandas(valid_dataset, preserve_index=False),
    }
)
tokenized_datasets = raw_datasets.map(preprocess_function, batched=True)
tokenized_datasets = tokenized_datasets.remove_columns(["product", "reactant", "token_type_ids"])

def AccuracyMetrics(model_output: PredictionOutput) -> Dict[str, float]:
    label_ids: np.ndarray = model_output.label_ids  # type: ignore
    predictions: np.ndarray = model_output.predictions.reshape(-1, label_ids.shape[1])  # type: ignore
    correct: int = np.all(predictions == label_ids, 1).sum()
    return {"accuracy": correct / len(predictions)}

trainer = Seq2SeqTrainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["val"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=AccuracyMetrics,
)
trainer.train()
```
### Expected behavior
I would expect the model to be trained without errors.
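As the first comment above suggests, a minimal sketch of the fix is to drop the custom tokenizer from the `Seq2SeqTrainer` so checkpointing never calls its incompatible `save_pretrained` path (variables as in the reproduction snippet):
```python
# Without tokenizer=..., Trainer._save only writes the model weights and config.
trainer = Seq2SeqTrainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["val"],
    data_collator=data_collator,
    compute_metrics=AccuracyMetrics,
)
trainer.train()
```
The vocabulary can then be written out separately after training if it needs to live next to the checkpoints.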
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21690/timeline
|
completed
| null | null |