| url (string, 62–66 chars) | repository_url (1 class) | labels_url (string, 76–80) | comments_url (string, 71–75) | events_url (string, 69–73) | html_url (string, 50–56) | id (int64, 377M–2.15B) | node_id (string, 18–32) | number (int64, 1–29.2k) | title (string, 1–487) | user (dict) | labels (list) | state (2 classes) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (list) | created_at (int64, 1.54k–1.71k) | updated_at (int64, 1.54k–1.71k) | closed_at (int64, 1.54k–1.71k, ⌀) | author_association (4 classes) | active_lock_reason (2 classes) | body (string, 0–234k, ⌀) | reactions (dict) | timeline_url (string, 71–75) | state_reason (3 classes) | draft (bool, 2 classes) | pull_request (dict) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/21286
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21286/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21286/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21286/events
|
https://github.com/huggingface/transformers/issues/21286
| 1,555,436,734
|
I_kwDOCUB6oc5cthS-
| 21,286
|
Add metric_key_prefix from training_args
|
{
"login": "marctorsoc",
"id": 22045779,
"node_id": "MDQ6VXNlcjIyMDQ1Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/22045779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marctorsoc",
"html_url": "https://github.com/marctorsoc",
"followers_url": "https://api.github.com/users/marctorsoc/followers",
"following_url": "https://api.github.com/users/marctorsoc/following{/other_user}",
"gists_url": "https://api.github.com/users/marctorsoc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marctorsoc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marctorsoc/subscriptions",
"organizations_url": "https://api.github.com/users/marctorsoc/orgs",
"repos_url": "https://api.github.com/users/marctorsoc/repos",
"events_url": "https://api.github.com/users/marctorsoc/events{/privacy}",
"received_events_url": "https://api.github.com/users/marctorsoc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sgugger ",
"This seems like a very niche feature which can be achieved by customizing your callback (you can use your own instead of the default ones).",
"@sgugger could you elaborate a bit more about what callback(s)? I guess I have to remove one and add mine?\r\n\r\n(oh sorry, I just found https://huggingface.co/docs/transformers/main_classes/callback)"
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
### Feature request
Today, if we create a `Trainer` as in
```
trainer = Trainer(
model=self.model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=data_cls["train"],
eval_dataset=data_cls["develop"],
compute_metrics=partial(
compute_metrics,
fbeta_beta=self.config.early_stopping.fbeta_beta,
),
data_collator=collate_chunks, # type: ignore
callbacks=callbacks, # type: ignore
)
```
and do `trainer.train()`:
1. There's no way to change the prefixes for the `train` dataset metrics (at least that I'm aware of)
2. One can change the prefix for the evaluation dataset from the default `eval/` to anything by changing the above to
```
prefix = "other"
Trainer(
....
eval_dataset={prefix: data_cls["develop"]},
...
)
```
However, doing this creates `eval/other_accuracy` due to the way [rewrite_logs](https://github.com/huggingface/transformers/blob/e2e393c6f25205739b5dc9fddd460d7bfab85150/src/transformers/integrations.py#L540) works. Ideally, I'd like it to be `other/accuracy`.
My request is to have a clear way in the `training_args` to add arbitrary prefixes to the metrics, for either the train or eval datasets.
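To make the request concrete, here is a minimal sketch of the kind of prefix rewriting I have in mind. The function name and mapping are hypothetical (modeled on how `rewrite_logs` flattens keys), not actual transformers code:

```python
def rewrite_logs_with_prefix(logs: dict) -> dict:
    """Map flat metric keys like 'other_accuracy' to 'other/accuracy'.

    Hypothetical variant of the integration's rewrite_logs(): instead of
    hard-coding the 'eval/' namespace, split each key on its first
    underscore and use the dataset prefix as the wandb namespace.
    Keys with no underscore (e.g. 'loss') are passed through unchanged.
    """
    rewritten = {}
    for key, value in logs.items():
        prefix, _, metric = key.partition("_")
        rewritten[f"{prefix}/{metric}" if metric else key] = value
    return rewritten

print(rewrite_logs_with_prefix({"other_accuracy": 0.9, "loss": 1.0}))
# {'other/accuracy': 0.9, 'loss': 1.0}
```

With something like this, the `eval_dataset={prefix: ...}` trick above would produce `other/accuracy` instead of `eval/other_accuracy`.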
### Motivation
I want to train multiple models within the same wandb run. As things stand right now, the metrics clash.
### Your contribution
I've contributed to other OS projects, but I'm not familiar enough with this codebase to make the contribution directly. If someone guides me with a high-level description, I'm happy to do it myself.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21286/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21285
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21285/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21285/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21285/events
|
https://github.com/huggingface/transformers/issues/21285
| 1,555,435,363
|
I_kwDOCUB6oc5ctg9j
| 21,285
|
`trainer.predict(dataset)` drops samples when used with multi-gpu setup
|
{
"login": "fgbelidji",
"id": 32633752,
"node_id": "MDQ6VXNlcjMyNjMzNzUy",
"avatar_url": "https://avatars.githubusercontent.com/u/32633752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fgbelidji",
"html_url": "https://github.com/fgbelidji",
"followers_url": "https://api.github.com/users/fgbelidji/followers",
"following_url": "https://api.github.com/users/fgbelidji/following{/other_user}",
"gists_url": "https://api.github.com/users/fgbelidji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fgbelidji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fgbelidji/subscriptions",
"organizations_url": "https://api.github.com/users/fgbelidji/orgs",
"repos_url": "https://api.github.com/users/fgbelidji/repos",
"events_url": "https://api.github.com/users/fgbelidji/events{/privacy}",
"received_events_url": "https://api.github.com/users/fgbelidji/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The model used in this notebook (`Roberta0`) does not follow the requirements of the `Trainer` as described [here](https://huggingface.co/docs/transformers/main_classes/trainer) (see the big square in red). The output of the model should be a tuple, a dictionary or a `ModelOutput`, but it can't be a simple tensor. This is the reason for the problem seen.",
"Thanks for your quick response @sgugger "
] | 1,674
| 1,674
| 1,674
|
MEMBER
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.13
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When running on a multi-GPU setup, the output of `trainer.predict()` is missing samples even when the parameter `dataloader_drop_last` is set to `False` in the `TrainingArguments`.
[Here is an example colab notebook](https://drive.google.com/file/d/1pvnDkAhkMjLkUZVvindsFqhA_DRLveUk/view?usp=sharing) that can be run on a multi-GPU instance to reproduce the issue.
As an example, I had 2 GPUs on my machine, my dataset had 25000 samples, and the batch size per device was set to 256 (so 512 in total). When calling `trainer.predict()`, the output I got had a size of 24951.
The number of missing samples (25000 - 24951 = 49) corresponds to the length of the dataloader (the output of `len(trainer.get_test_dataloader())`).
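The numbers reported above line up exactly with this arithmetic (a plain-Python check of the run described, using the sizes quoted):

```python
import math

total_samples = 25000
per_device_batch = 256
num_gpus = 2

# Effective batch size across devices, and the resulting dataloader length.
effective_batch = per_device_batch * num_gpus       # 512
num_batches = math.ceil(total_samples / effective_batch)

print(num_batches)                   # 49, matches len(trainer.get_test_dataloader())
print(total_samples - num_batches)   # 24951, matches the observed output size
```

So exactly one sample appears to be dropped per batch, which is consistent with the missing count equaling the dataloader length.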
### Expected behavior
The output of `trainer.predict(dataset)` should have the same length as the dataset.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21285/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21284
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21284/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21284/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21284/events
|
https://github.com/huggingface/transformers/pull/21284
| 1,555,399,724
|
PR_kwDOCUB6oc5IcqkA
| 21,284
|
Update expected values for doctest
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
MEMBER
| null |
Manually update expected values that changed due to hardware/environment differences for the doctest on `task_summary.mdx`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21284/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21284",
"html_url": "https://github.com/huggingface/transformers/pull/21284",
"diff_url": "https://github.com/huggingface/transformers/pull/21284.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21284.patch",
"merged_at": 1674595952000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21283
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21283/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21283/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21283/events
|
https://github.com/huggingface/transformers/pull/21283
| 1,555,357,611
|
PR_kwDOCUB6oc5IchpB
| 21,283
|
[examples/deepspeed] fix renamed api
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
Fixing the breakages caused by https://github.com/huggingface/transformers/pull/21155
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21283/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21283/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21283",
"html_url": "https://github.com/huggingface/transformers/pull/21283",
"diff_url": "https://github.com/huggingface/transformers/pull/21283.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21283.patch",
"merged_at": 1674582874000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21282
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21282/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21282/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21282/events
|
https://github.com/huggingface/transformers/pull/21282
| 1,555,328,275
|
PR_kwDOCUB6oc5Icbb3
| 21,282
|
[GIT] Add test for batched generation
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Failing test is unrelated, merging."
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Related to #21087, I've added a test for batched generation with GIT.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21282/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21282",
"html_url": "https://github.com/huggingface/transformers/pull/21282",
"diff_url": "https://github.com/huggingface/transformers/pull/21282.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21282.patch",
"merged_at": 1674638059000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21281
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21281/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21281/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21281/events
|
https://github.com/huggingface/transformers/pull/21281
| 1,555,319,213
|
PR_kwDOCUB6oc5IcZhl
| 21,281
|
[`t5`] Fix T5 inference in `float16` + `bnb` error
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Currently on the `main` branch, the inference of `t5` is broken in half-precision.
With the introduction of the `_keep_in_fp32_modules` attribute in https://github.com/huggingface/transformers/pull/20683, `wo` layers need to be upcast to `float32` for more accurate inference.
It appears that in the aforementioned PR, we forgot to apply the same fix to `T5DenseActDense` layers, leading to broken inference when running in fp16 for models that use `T5DenseActDense` layers instead of `T5DenseGatedActDense` layers. This can be reproduced, for example, with `t5-small`:
```
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
model_id = "t5-small"
model = T5ForConditionalGeneration.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)
tokenizer = T5Tokenizer.from_pretrained(model_id)
input_tokens = tokenizer.encode("Translate the following in German: My name is Younes.", return_tensors="pt").to("cuda")
output = model.generate(input_tokens, max_length=32, num_beams=4, early_stopping=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
This is not the case for the `flan` family, since `flan-t5` uses `T5DenseGatedActDense`, which was correctly fixed in https://github.com/huggingface/transformers/pull/20760.
This PR also adds a fix for 8-bit models. If a user runs inference in 8-bit with `_keep_in_fp32_modules` disabled for backward-compatibility reasons:
```
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
T5ForConditionalGeneration._keep_in_fp32_modules = None
model_id = "google/flan-t5-small"
model = T5ForConditionalGeneration.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)
tokenizer = T5Tokenizer.from_pretrained(model_id)
input_tokens = tokenizer.encode("Translate the following in German: My name is Younes.", return_tensors="pt").to("cuda")
output = model.generate(input_tokens, max_length=32, num_beams=4, early_stopping=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
They face an error that is hard to interpret:
```
"addmm_cuda" not implemented for 'Char'
```
See for example: https://github.com/TimDettmers/bitsandbytes/issues/111#issuecomment-1368952450
This is because the `hidden_states` are cast to `int8` [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py#L313-L314), so the linear layer receives an 8-bit input, which raises the error. Therefore one should cast to `self.wo.weight.dtype` only if that dtype is not `torch.int8` (also pointed out [here](https://github.com/TimDettmers/bitsandbytes/issues/111#issuecomment-1402113167)).
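The guard described above can be sketched without torch. The dtype names here are string stand-ins for `tensor.dtype` values, and the function name is hypothetical; the real patch compares actual `torch.dtype` objects inside the T5 dense layers:

```python
INT8 = "int8"  # stand-in for torch.int8

def should_cast_to_weight_dtype(hidden_dtype: str, wo_weight_dtype: str) -> bool:
    """Decide whether activations should be cast to the wo weight dtype.

    Cast only when the dtypes differ AND the layer is not 8-bit quantized:
    an int8 weight must never force an int8 activation, since that is what
    triggers the "addmm_cuda" not implemented for 'Char' error.
    """
    return hidden_dtype != wo_weight_dtype and wo_weight_dtype != INT8

print(should_cast_to_weight_dtype("float16", "float32"))  # True: wo kept in fp32
print(should_cast_to_weight_dtype("float16", "int8"))     # False: 8-bit model
```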
This PR also applies `make fix-copies`, introducing the fix to other architectures too; happy to revert that, since this issue is only relevant for `T5` (LongT5 etc. do not have `_keep_in_fp32_modules`).
This PR also adds tests for everything above, making sure this will never happen again!
cc @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21281/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21281/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21281",
"html_url": "https://github.com/huggingface/transformers/pull/21281",
"diff_url": "https://github.com/huggingface/transformers/pull/21281.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21281.patch",
"merged_at": 1674580479000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21280
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21280/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21280/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21280/events
|
https://github.com/huggingface/transformers/issues/21280
| 1,555,023,156
|
I_kwDOCUB6oc5cr8U0
| 21,280
|
Does "kwargs" actually work in from_pretrained of Tokenizer?
|
{
"login": "pojurer",
"id": 56473157,
"node_id": "MDQ6VXNlcjU2NDczMTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/56473157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pojurer",
"html_url": "https://github.com/pojurer",
"followers_url": "https://api.github.com/users/pojurer/followers",
"following_url": "https://api.github.com/users/pojurer/following{/other_user}",
"gists_url": "https://api.github.com/users/pojurer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pojurer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pojurer/subscriptions",
"organizations_url": "https://api.github.com/users/pojurer/orgs",
"repos_url": "https://api.github.com/users/pojurer/repos",
"events_url": "https://api.github.com/users/pojurer/events{/privacy}",
"received_events_url": "https://api.github.com/users/pojurer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,674
| 1,674
| 1,674
|
NONE
| null |
### System Info
Colab
transformers==4.25.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Description of the parameters for the method **from_pretrained** states:
```
kwargs (additional keyword arguments, *optional*):
Will be passed to the Tokenizer `__init__()` method. Can be used to set special tokens like
`bos_token`, `eos_token`, `unk_token`, `sep_token`, `pad_token`, `cls_token`, `mask_token`,
`additional_special_tokens`. See parameters in the `__init__()` for more details.
```
However, the following code does not change the tokenizer's parameters:
```
tokenizer = transformers.AutoTokenizer.from_pretrained(
'bert-base-uncased',
kwargs={'model_max_length': 777, 'cls_token': '[CLASS]'}
)
print(tokenizer.model_max_length)
print(tokenizer.cls_token)
```
Output:
```
512
[CLS]
```
Why is that?
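My guess at the mechanism, illustrated with a plain-Python sketch (the function below is a stand-in, not the actual transformers code): writing a literal `kwargs={...}` argument creates a single keyword argument *named* `kwargs`, rather than unpacking the dict into individual keyword arguments:

```python
def fake_from_pretrained(name, **kwargs):
    # Stand-in for from_pretrained: whatever lands in **kwargs here is what
    # would be forwarded to the tokenizer's __init__().
    return kwargs

# A literal kwargs= argument arrives as one opaque value under the key "kwargs":
print(fake_from_pretrained("bert-base-uncased", kwargs={"model_max_length": 777}))
# {'kwargs': {'model_max_length': 777}}

# Real keyword arguments are what reach __init__ as individual settings:
print(fake_from_pretrained("bert-base-uncased", model_max_length=777, cls_token="[CLASS]"))
# {'model_max_length': 777, 'cls_token': '[CLASS]'}
```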
### Expected behavior
The output I want to see is:
```
777
[CLASS]
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21280/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21279
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21279/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21279/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21279/events
|
https://github.com/huggingface/transformers/issues/21279
| 1,555,011,862
|
I_kwDOCUB6oc5cr5kW
| 21,279
|
How to use pre-trained BERT or GPT transformers for CNN-based video captioning
|
{
"login": "adeljalalyousif",
"id": 97432157,
"node_id": "U_kgDOBc6yXQ",
"avatar_url": "https://avatars.githubusercontent.com/u/97432157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adeljalalyousif",
"html_url": "https://github.com/adeljalalyousif",
"followers_url": "https://api.github.com/users/adeljalalyousif/followers",
"following_url": "https://api.github.com/users/adeljalalyousif/following{/other_user}",
"gists_url": "https://api.github.com/users/adeljalalyousif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adeljalalyousif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adeljalalyousif/subscriptions",
"organizations_url": "https://api.github.com/users/adeljalalyousif/orgs",
"repos_url": "https://api.github.com/users/adeljalalyousif/repos",
"events_url": "https://api.github.com/users/adeljalalyousif/events{/privacy}",
"received_events_url": "https://api.github.com/users/adeljalalyousif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nI'd recommend checking out the [GIT](https://huggingface.co/docs/transformers/main/en/model_doc/git) model which was just added to the library, as it's the first one in this library that can be used for video captioning. Check out the demo notebook [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/GIT/Inference_with_GIT_for_image_video_captioning_and_image_video_QA.ipynb).\r\n\r\nThe model is a GPT-like model conditioned on both images and text to predict the next text tokens.",
"Closing this as it seems resolved.",
"Thank you so much"
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
Hello, how can I use pre-trained BERT or GPT transformers for a video captioning task using CNN features rather than a vision transformer?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21279/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21278
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21278/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21278/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21278/events
|
https://github.com/huggingface/transformers/pull/21278
| 1,554,887,303
|
PR_kwDOCUB6oc5Ia8Ql
| 21,278
|
Hotfix: remove tuple for GIT config image processor.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Just a question, for [this model](https://huggingface.co/microsoft/git-base-vatex), which uses `VideoMAEImageProcessor`, will the following then always instantiate a `CLIPImageProcessor`?\r\n```\r\nfrom transformers import AutoProcessor, VideoMAEImageProcessor\r\n\r\nprocessor = AutoProcessor.from_pretrained(\"microsoft/git-base-vatex\")\r\nassert isinstance(processor.image_processor, VideoMAEImageProcessor)\r\n``` ",
"> Just a question, for [this model](https://huggingface.co/microsoft/git-base-vatex), which uses `VideoMAEImageProcessor`, will the following then always instantiate a `CLIPImageProcessor`?\r\n> \r\n> ```\r\n> from transformers import AutoProcessor, VideoMAEImageProcessor\r\n> \r\n> processor = AutoProcessor.from_pretrained(\"microsoft/git-base-vatex\")\r\n> assert isinstance(processor.image_processor, VideoMAEImageProcessor)\r\n> ```\r\n\r\nWhy not try it out ?\r\n\r\n```\r\ngh pr checkout 21278\r\npip install -e .\r\npython your_script.py\r\n```"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21278/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21278",
"html_url": "https://github.com/huggingface/transformers/pull/21278",
"diff_url": "https://github.com/huggingface/transformers/pull/21278.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21278.patch",
"merged_at": 1674572870000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21277
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21277/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21277/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21277/events
|
https://github.com/huggingface/transformers/pull/21277
| 1,554,658,515
|
PR_kwDOCUB6oc5IaKoF
| 21,277
|
[W2V2 with LM] Fix decoder test with params
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Indeed the var names are a bit confusing @ydshieh! They follow the same convention throughout the test file, where `decoded_processor` refers to the LM outputs decoded by the HF **processor** class, and `decoded_decoder` refers to the LM outputs decoded by the pyctc **decoder** class.\r\n\r\nI'll update these var names in a follow-up PR to make them a bit more intuitive 👍"
] | 1,674
| 1,687
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/pull/21226. This test started failing due to a PyPI update of pyctcdecode, which incorporated a number of bug fixes to the LM decode method (see https://github.com/kensho-technologies/pyctcdecode/issues/107#issuecomment-1400757049).
This PR modifies the decoding params, such that the same outputs are obtained for pyctcdecode v0.4.0 and v0.5.0 (latest) - hence the test should pass irrespective of the package version while still verifying correctness of the outputs. The PR also adds tests for the LM and logit scores.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21277/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21277",
"html_url": "https://github.com/huggingface/transformers/pull/21277",
"diff_url": "https://github.com/huggingface/transformers/pull/21277.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21277.patch",
"merged_at": 1674584877000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21276
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21276/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21276/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21276/events
|
https://github.com/huggingface/transformers/pull/21276
| 1,554,609,888
|
PR_kwDOCUB6oc5IaAPt
| 21,276
|
[Doc] fix broken link
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
Fixes https://github.com/huggingface/transformers/issues/21275
I can confirm the link at least works in https://moon-ci-docs.huggingface.co/docs/transformers/pr_21276/en/main_classes/text_generation
cc @ydshieh 💯
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21276/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21276",
"html_url": "https://github.com/huggingface/transformers/pull/21276",
"diff_url": "https://github.com/huggingface/transformers/pull/21276.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21276.patch",
"merged_at": 1674555528000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21275
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21275/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21275/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21275/events
|
https://github.com/huggingface/transformers/issues/21275
| 1,554,534,134
|
I_kwDOCUB6oc5cqE72
| 21,275
|
Text generation link not working
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"will move this to `transformers`",
"Thanks for noticing! should be addressed in #21276"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
I'm not sure if this is the right place to notify such issues.
On [this page](https://huggingface.co/docs/transformers/main_classes/text_generation) the link in the second paragraph to "text generation strategies guide" does not work. It should point to: https://huggingface.co/docs/transformers/main/generation_strategies
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21275/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21274
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21274/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21274/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21274/events
|
https://github.com/huggingface/transformers/issues/21274
| 1,554,347,163
|
I_kwDOCUB6oc5cpXSb
| 21,274
|
Output `past_key_values` from `TextGenerationPipeline`.
|
{
"login": "gilljon",
"id": 113929785,
"node_id": "U_kgDOBspuOQ",
"avatar_url": "https://avatars.githubusercontent.com/u/113929785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gilljon",
"html_url": "https://github.com/gilljon",
"followers_url": "https://api.github.com/users/gilljon/followers",
"following_url": "https://api.github.com/users/gilljon/following{/other_user}",
"gists_url": "https://api.github.com/users/gilljon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gilljon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gilljon/subscriptions",
"organizations_url": "https://api.github.com/users/gilljon/orgs",
"repos_url": "https://api.github.com/users/gilljon/repos",
"events_url": "https://api.github.com/users/gilljon/events{/privacy}",
"received_events_url": "https://api.github.com/users/gilljon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"That's interesting, the current pipeline does not support chunking indeed. However, I think adding this would not be really hard cc @Narsil, would go in the `generate_kwargs`, only issue is that it is not going out. ",
"That would be nice, but requires pretty much changing `generate` upside down and inside out.\r\n\r\nThis is what we have done here: https://github.com/huggingface/text-generation-inference which was required to get max performance out of bloom.\r\n\r\nHowever, this is a pretty large endeavor which would mean the pipeline would basically redo the entire `generate` 's job.\r\nSince `generate` is already quite complex, I'm hesitant to start such a thing.\r\n\r\n> Runtime seems to skyrocket when streaming the results in pipeline using chunks. I believe this is due to the fact that we waste time having to recalculate past_key_values every time we make a call to pipeline().\r\n\r\nWhen you're generating, you shouldn't have to care about the leftmost part of a text, it will be ignored all the time, usually text generation models simply chunk the left most part of the text.\r\n\r\nIsnt' that doable in your case ? Do you mind showing a script of what you're attempting to do ? This might help better understand what you're trying to achieve, and what are the possible options.\r\n",
"@Narsil thanks for the response! Here is an example of what I'd like to be able to do:\r\n```python\r\ndef stream_inference(input_dict):\r\n text = input_dict[\"text_inputs\"]\r\n chunk_size = input_dict.pop(\"chunk_size\", 10)\r\n for _ in range(10):\r\n generated_text = pipeline(text, max_new_tokens=chunk_size, use_cache=True)[0][\"generated_text\"]\r\n yield generated_text\r\n text += generated_text\r\n```\r\n\r\nWhat I've observed is that although we set `use_cache=True`, there is still the overhead of re-calculating the past_key_values every time we call `pipeline()` since it has been exited. Ideally, if we could extract `past_key_values` from the output of pipeline, then we could feed that back in the successive calls to address this issue.\r\n\r\nThoughts?\r\n",
"Pipeline is stateless, so it cannot keep the `past_key_values` and for you to send it again and again kind of defeats the purpose of a pipeline imo (since you can't batch anymore for starters, in general you're introducing some kind of state).\r\n\r\nI can provide a script which *kind* of mimic what you want to do, it is pretty hacky, but the \"clean\" version is exactly how I said, it would need a major rewrite of some components.\r\n\r\nhttps://github.com/huggingface/transformers/issues/17365#issuecomment-1152192715\r\n\r\nHere is the adapted version without threading (which you should avoid if possible):\r\n\r\n```python\r\nfrom transformers import pipeline\r\nimport torch\r\nimport threading\r\nfrom transformers.generation.stopping_criteria import StoppingCriteria, StoppingCriteriaList\r\nfrom queue import Queue\r\n\r\n\r\npipe = pipeline(model=\"gpt2\", task=\"text-generation\", device=0)\r\n\r\n\r\nclass Stream(StoppingCriteria):\r\n def __init__(self):\r\n self.prev_string = \"\"\r\n\r\n def __call__(self, input_ids, scores) -> bool:\r\n string = pipe.tokenizer.decode(input_ids[0])\r\n # print(f\"Total: {repr(string)}\")\r\n print(f\"New: {repr(string[len(self.prev_string):])}\")\r\n self.prev_string = string\r\n return False\r\n\r\n\r\nfor out in pipe(\"My initial text\", max_new_tokens=10, stopping_criteria=[Stream()]):\r\n print(\"Final result\", out)\r\n```\r\n\r\nDoes this work for you ?",
"@OlivierDehaene Tagging just because we were talking about the stream process in `text-generation-inference` :)",
"@Narsil Hmm, this does not address the issue of having to re-calculate `past_key_values` though between successive calls of `pipe()`, no?",
"Oh no that cannot change. But the idea, is that you can call it for a very long range (like `max_new_tokens=100`) which will use the past_key_values over and over without you having to deal with it. And you can still capture tokens as they are produced to send them to a viewer (here the stdout).\r\n\r\nDoing anything with `past_key_values` at the pipeline level, is IMO too advanced for what pipelines are supposed to be. As it will break batching (which you most likely don't care about since you seem to be generating things live, but it's still a constraint on the `pipeline` itself).\r\nThe main goal of pipelines is to be useable by non-ML software engineers, past_key_values do require you to understand in quite a lot of details how things work internally. That's why IMO it's out of scope for `pipeline`.\r\n\r\nIf you really want full control, for instance to get resumable inference, you have to go at a lower level than the pipeline IMO.\r\nThe code is not going to be so bad if you don't have batching to deal with\r\nA gist:\r\n\r\n```python\r\n\r\ninput_ids = tokenizer.encode(\"intiial string\")\r\nstopping_criteria = StoppingCriteriaList([EOSToken, etc...])\r\nlogits_processor = LogitsProcessorList[...]) # <--- For both of these, check out `generate` on what are those options and how to create them).\r\npast_key_values = None\r\nscores = None\r\n\r\nwhile not stopping_criteria(input_ids, scores)\r\n outputs = model.forward(input_ids, past_key_values)\r\n past_key_values = outputs.past_key_values\r\n logits = outputs.logits.softmax(dim=-1)\r\n scores = logits_processor(logits)\r\n input_ids = logits.argmax(dim=-1) # <---- choose whatever sampling strategy makes most sense\r\n```\r\n\r\nThe code is not meant to be functional, but the end result should look something like it.\r\n\r\nSince your problem space is likely to be simpler than the general `transformers` one, you can probably get rid of a sizeable chunk of complexity that we have to deal with, for 
beam_search, specific models, legacy code, batching, which don't really matter as much for you.\r\n\r\n\r\n\r\n```",
"@Narsil Nice, I see what you are saying. Just for my own understanding -- is Stopping Criteria called per token produced?",
"Yes, it's intended goal is to decide when to stop generating tokens (hence the return type, false means continue generating, true means stop, iteration will stop when ANY criteria wants to stop).",
"@Narsil Thanks so much!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,677
| 1,677
|
NONE
| null |
### Feature request
Currently, `TextGenerationPipeline` does not allow users to extract the `past_key_values` object from its output. It would be nice for us to be able to do so, so that we could then stream the intermediate text in chunks, whilst not having to recalculate the `past_key_values` after every time we yield.
### Motivation
Runtime seems to skyrocket when streaming the results in pipeline using chunks. I believe this is due to the fact that we waste time having to recalculate `past_key_values` every time we make a call to `pipeline()`.
### Your contribution
Would be happy to help review code!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21274/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21273
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21273/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21273/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21273/events
|
https://github.com/huggingface/transformers/pull/21273
| 1,554,198,632
|
PR_kwDOCUB6oc5IYo8t
| 21,273
|
Use `logger.info` instead of `print` to emit a logging message in `hub.py`
|
{
"login": "hkiyomaru",
"id": 13678589,
"node_id": "MDQ6VXNlcjEzNjc4NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/13678589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hkiyomaru",
"html_url": "https://github.com/hkiyomaru",
"followers_url": "https://api.github.com/users/hkiyomaru/followers",
"following_url": "https://api.github.com/users/hkiyomaru/following{/other_user}",
"gists_url": "https://api.github.com/users/hkiyomaru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hkiyomaru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hkiyomaru/subscriptions",
"organizations_url": "https://api.github.com/users/hkiyomaru/orgs",
"repos_url": "https://api.github.com/users/hkiyomaru/repos",
"events_url": "https://api.github.com/users/hkiyomaru/events{/privacy}",
"received_events_url": "https://api.github.com/users/hkiyomaru/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
I found a line that uses the `print` method to emit a logging message in `transformers/utils/hub.py`. When I was developing a CLI app that outputs results in a specific format to stdout, this line emitted a logging message to stdout, resulting in an error. This PR fixes the line to use the `logger.info` method to emit the message instead.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21273/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21273",
"html_url": "https://github.com/huggingface/transformers/pull/21273",
"diff_url": "https://github.com/huggingface/transformers/pull/21273.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21273.patch",
"merged_at": 1674574630000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21272
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21272/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21272/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21272/events
|
https://github.com/huggingface/transformers/issues/21272
| 1,553,853,569
|
I_kwDOCUB6oc5cneyB
| 21,272
|
Poor CPU Inference Scalability Possibly Due to Disk IO
|
{
"login": "jrsperry",
"id": 43385427,
"node_id": "MDQ6VXNlcjQzMzg1NDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/43385427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jrsperry",
"html_url": "https://github.com/jrsperry",
"followers_url": "https://api.github.com/users/jrsperry/followers",
"following_url": "https://api.github.com/users/jrsperry/following{/other_user}",
"gists_url": "https://api.github.com/users/jrsperry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jrsperry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jrsperry/subscriptions",
"organizations_url": "https://api.github.com/users/jrsperry/orgs",
"repos_url": "https://api.github.com/users/jrsperry/repos",
"events_url": "https://api.github.com/users/jrsperry/events{/privacy}",
"received_events_url": "https://api.github.com/users/jrsperry/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,677
| 1,677
|
NONE
| null |
### System Info
Environment:
Python: 3.10.8
Pytorch: 1.12.1
Transformers: 4.24.0
Model Used: cardiffnlp/twitter-roberta-base-sentiment
Tests Performed:
I created a simple docker image based on this [github repo](https://github.com/jrsperry/transformers-sentiment-test) which would make predictions on sentences and print out the time to make said predictions. I did this in a kubernetes cluster on aws with node types of c6i.8xlarge, m6a.4xlarge, and several others.
When running with only 1 pod replica with a limit of 4 cpu I got the expected performance. When I run multiple pods per node (more than 2) with the same limits set the performance per pod absolutely falls off a cliff. All the pods are still getting the same amount of cpu as the first test with 1 pod, and there's no clear ram pressure, but some of the performance per pod is 10 times slower than before. I've confirmed that all pods were getting the same amount of cpu (4) as well.
I ended up adding a higher class of storage to the kubernetes node to see if it was IO bound and my performance improved dramatically and was much closer to being inline with my expectations given what I observed with the single pod test.
In my test image I don't write to disk so I'm a bit perplexed as to where the disk pressure is coming from, whether it's expected, and if there's any way to optimize around it.
I've run many of these tests with different base images and slightly different transformers and pytorch versions and all of them suffered the same kind of performance drop off. If I provide the same amount of cpu to each pod, I would expect a similar amount of performance. It's possible this isn't IO based, it's just the only thing that brought performance back closer to expectations, although confusingly not in every situation, and varied in its impact. The environment that responded best to the ssd drive had python 3.10.8, torch 1.12.1 and transformers 4.24.0.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
- Run multiple instances of the sperry1996/transformers-sentiment-test:0.0.1 image on a kubernetes node and compare the results vs running 1 instance per node (with equal requests and limits set). Alternatively you can run multiple containers locally with the command `docker run --cpus="0.75" sperry1996/transformers-sentiment-test:0.0.1` (on an x86 machine) or `docker run --cpus="0.75" sperry1996/transformers-sentiment-test:0.0.1-CPU-MULTI` (on an arm machine). I experienced slowdowns of around 4x (when running more than 1 container) on an m1 mac with docker desktop and around 25-30% on a windows with docker desktop (mac docker desktop has worse disk io performance than windows, there may be other factors as well).
### Expected behavior
I would expect fairly linear scaling of the instances of the pods given they have equal requests and limits set.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21272/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21271
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21271/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21271/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21271/events
|
https://github.com/huggingface/transformers/issues/21271
| 1,553,850,489
|
I_kwDOCUB6oc5cneB5
| 21,271
|
issue warning about different batch size being used for --resume_from_checkpoint
|
{
"login": "tohara-PandoLogic",
"id": 108832763,
"node_id": "U_kgDOBnyn-w",
"avatar_url": "https://avatars.githubusercontent.com/u/108832763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tohara-PandoLogic",
"html_url": "https://github.com/tohara-PandoLogic",
"followers_url": "https://api.github.com/users/tohara-PandoLogic/followers",
"following_url": "https://api.github.com/users/tohara-PandoLogic/following{/other_user}",
"gists_url": "https://api.github.com/users/tohara-PandoLogic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tohara-PandoLogic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tohara-PandoLogic/subscriptions",
"organizations_url": "https://api.github.com/users/tohara-PandoLogic/orgs",
"repos_url": "https://api.github.com/users/tohara-PandoLogic/repos",
"events_url": "https://api.github.com/users/tohara-PandoLogic/events{/privacy}",
"received_events_url": "https://api.github.com/users/tohara-PandoLogic/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The examples presented in this repo are not feature-complete apps with ironclad error messages, but just that... examples. We keep them with as little code as possible so they can be easily understood and customized. That's why they won't contain warnings for everything that could go wrong.",
"OK, so this turns out to be a two part feature request:\r\n1. Issue warning\r\n2. Include more informative usage for --resume_from_checkpoint\r\n\r\nThe first can be written off as won't fix as per keeping it simple, but the second should be addressed because it is straightforward to do so (e.g., minimal risk).\r\n\r\nThese examples are critical for doing non-trivial tasks with Hugging Face (e.g., pretraining and fine-tuning). The accelerator-based scripts are particularly non-trivial, so fleshing them out a little will be beneficial.",
"Sylvain: can you provide some tips (e.g., relatively safe batch size adjustments)?\r\n\r\nCan you also un-hide the forum post I just made?\r\n> https://discuss.huggingface.co/t/resuming-accelerate-based-pretraining-with-different-batch-size/30845\r\n\r\nI put in a few links to the issue and code samples, so it got flagged as potential spam.\r\n\r\nBest,\r\nTom",
"@tohara-PandoLogic Your post is un-hidden since yesterday, I rejected the spam flag. I still don't understand why you are using `--resume_from_checkpoint` for two different training with different hyperparameters. You can start a new training from any checkpoint by passing the model folder in `--model_name_or_path`.\r\n",
"OK, thanks. I take it the seed needs to be changed as well.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,678
| 1,678
|
NONE
| null |
### Feature request
According to #7198, pretraining must always resume with the same batch size:
> [15 Dec 21]
> [bowen-n] When I resume training from a checkpoint, I use a new batch size different from the previous training and it seems that the number of the skipped epoch is wrong.
> ...
> [sgugger] That feature is not supported, you should resume training with the exact same hyperparameters or start a new training if you want to change them.
This should be made explicit in a warning; as it stands, the system just seems to hang.
This occurred specifically with the following script:
> ./examples/pytorch/language-modeling/run_mlm_no_trainer.py
The following is a sketch of the new behavior. Batch size is not currently recorded in the checkpoint, so additional support is required:
```
if args.resume_from_checkpoint is not None and args.resume_from_checkpoint != "":
    accelerator.print(f"Resumed from checkpoint: {args.resume_from_checkpoint}")
    accelerator.load_state(args.resume_from_checkpoint)
    path = os.path.basename(args.resume_from_checkpoint)
    # Make sure the same batch size is used
    # TODO: add support for storing batch size in checkpoint
    if accelerator...batch_size != args.per_device_train_batch_size:
        logger.warning("Different batch size specified: previous %d vs. current %d; result unpredictable.",
                       accelerator...batch_size, args.per_device_train_batch_size)
```
Note that restarting is not a practical solution given the time involved in pretraining. Given such consequences, it would also be good for the --resume_from_checkpoint argument description to warn about this limitation.
```
parser.add_argument(
"--resume_from_checkpoint",
type=str,
default=None,
help="If the training should continue from a checkpoint folder"
" (n.b., using the same hyperparameters, in particular batch size).",
)
```
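The TODO above could be addressed by recording the batch size next to the saved state and validating it on resume. The following is a hedged sketch: the helper names (`save_training_meta`, `check_training_meta`) and the `training_meta.json` filename are hypothetical, not existing transformers/accelerate APIs.

```python
import json
import os


def save_training_meta(checkpoint_dir, per_device_batch_size):
    """Record hyperparameters alongside the accelerate checkpoint so
    they can be validated on resume (hypothetical helper)."""
    with open(os.path.join(checkpoint_dir, "training_meta.json"), "w") as f:
        json.dump({"per_device_batch_size": per_device_batch_size}, f)


def check_training_meta(checkpoint_dir, per_device_batch_size):
    """Return a warning string if the recorded batch size differs from
    the current one, or None if it matches or no metadata was saved."""
    path = os.path.join(checkpoint_dir, "training_meta.json")
    if not os.path.exists(path):
        return None
    with open(path) as f:
        meta = json.load(f)
    recorded = meta.get("per_device_batch_size")
    if recorded != per_device_batch_size:
        return (f"Different batch size specified: previous {recorded} "
                f"vs. current {per_device_batch_size}; result unpredictable.")
    return None
```

`save_training_meta` would be called wherever `accelerator.save_state` runs, and `check_training_meta` right after `accelerator.load_state` in the resume branch above.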
----------
Environment information:
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.15.0-1022-aws-x86_64-with-glibc2.31
- Python version: 3.9.13
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Motivation
Again, restarting is not a practical solution given the time involved in pretraining.
Thus, the system should detect when two incompatible batch sizes are being used, and the usage documentation should mention this restriction. (It might not be apparent to users coming from other pretraining packages such as Google [BERT](https://github.com/google-research/bert).)
Note that I reviewed the script's --help descriptions of batch size before running the pretraining to make sure it was specified correctly. Had the limitation been documented there, I would have been alerted much sooner, in particular before a costly pretraining run was made.
### Your contribution
I am more comfortable with the no_trainer-style scripts, having customized both the MLM and CLM ones as well as the CodeParrot example (i.e., under ./examples/research_projects/codeparrot). Therefore I could implement the change outlined above as a PR once support for recording the batch size is in place.
I could also add a test case for this, but tests only seem to exist for the Trainer-style scripts (e.g., run_mlm.py), as in tests/trainer/test_trainer.py.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21271/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21270
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21270/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21270/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21270/events
|
https://github.com/huggingface/transformers/pull/21270
| 1,553,800,669
|
PR_kwDOCUB6oc5IXTqA
| 21,270
|
Adding resource section to GPT-J docs
|
{
"login": "adit299",
"id": 43497982,
"node_id": "MDQ6VXNlcjQzNDk3OTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/43497982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adit299",
"html_url": "https://github.com/adit299",
"followers_url": "https://api.github.com/users/adit299/followers",
"following_url": "https://api.github.com/users/adit299/following{/other_user}",
"gists_url": "https://api.github.com/users/adit299/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adit299/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adit299/subscriptions",
"organizations_url": "https://api.github.com/users/adit299/orgs",
"repos_url": "https://api.github.com/users/adit299/repos",
"events_url": "https://api.github.com/users/adit299/events{/privacy}",
"received_events_url": "https://api.github.com/users/adit299/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hello,\r\n\r\nI have been currently working on finding resources for GPT-J, and mainly I have been using the links mentioned in #20055 and searching GPT-J in each of the links. I found a few links, but I feel this is not the best way to find the resources. Can you share some tips for how you were able to find more resources? @stevhliu \r\n\r\nWhat I have so far:\r\n\r\nGPT-J Description:\r\n- https://huggingface.co/EleutherAI/gpt-j-6B\r\n\r\nBlog Posts:\r\n- https://huggingface.co/blog/gptj-sagemaker\r\n- https://www.philschmid.de/gptj-deepspeed-inference\r\n\r\nNielsRogge's Transformers Tutorials:\r\n- https://github.com/kingoflolz/mesh-transformer-jax",
"Thanks for your work, that's a great start and I think you have most of them! You can also add:\r\n\r\n* This [GPT-J notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/GPT-J-6B/Inference_with_GPT_J_6B.ipynb) from Niels Transformers Tutorials for inference.\r\n* A [chapter](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) in the Hugging Face Course for causal language modeling.\r\n* The example scripts and notebooks for causal language modeling and text generation (see the last three bullet points under the Resource section [here](https://huggingface.co/docs/transformers/model_doc/gpt2#resources) for GPT-2).",
"It looks like the formatting for the docs is still not correct..the bulletpoints are all jumbled up. Looking into this...",
"I have marked the pull request as ready to review 👍 @stevhliu "
] | 1,674
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds resources section to the GPT-J documents.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #20055 (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @stevhliu @MKhalusova
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21270/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21270/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21270",
"html_url": "https://github.com/huggingface/transformers/pull/21270",
"diff_url": "https://github.com/huggingface/transformers/pull/21270.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21270.patch",
"merged_at": 1675115284000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21269
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21269/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21269/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21269/events
|
https://github.com/huggingface/transformers/pull/21269
| 1,553,792,823
|
PR_kwDOCUB6oc5IXR4k
| 21,269
|
[GenerationConfig] add additional kwargs handling
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Also will have to add test + this is apparently breaking a lot of things haha ",
"Okay, after talking a bit with @gante and testing, this is not the best, this PR will focus on other missing functionalities. Mostly addition of the `dict_torch_dtype_to_str` function, as the `dtype` could be passed to the generation 😉 \r\nThe problem is mostly that if we process all the additional kwargs, we are getting all of the arguments from the `configuration.json` which mixes things up. \r\nThe simplest solution is either to store them in `generate_kwargs` or re-write the configuration for the model. I though this was cumbersome but it is actually the most logical and cleanest way to do it. \r\n\r\nEDIT : gonna just add a condition, if the kwargs are from a config file, they are not added. ",
"Now only thing left is to add a pretty test with all the different edge cases I encountered."
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
This adds the same support that we have in `PretrainedConfig`, where additional kwargs are automatically kept.
This will allow users to re-use the `GenerationConfig` class for most use cases, without having to add a model-specific class. I was trying to load [the following `generation_config`](https://huggingface.co/openai/whisper-small/discussions/10/files) and got half of my additional arguments deleted 😉
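The kwargs-handling pattern being ported over can be illustrated with a toy class (hypothetical, not the real `GenerationConfig`): any keyword argument the constructor does not recognize is stored as an attribute rather than silently dropped.

```python
class MiniGenerationConfig:
    """Toy illustration (not the real transformers class) of the
    PretrainedConfig-style behaviour: unknown keyword arguments
    become attributes instead of being discarded."""

    def __init__(self, max_length=20, **kwargs):
        self.max_length = max_length
        # Keep every extra kwarg so custom fields in a user's
        # generation_config.json survive a load/save round trip.
        for key, value in kwargs.items():
            setattr(self, key, value)


cfg = MiniGenerationConfig(max_length=448, suppress_tokens=[1, 2, 7])
```

With this pattern, a config file carrying model-specific fields such as `suppress_tokens` loads without losing those fields.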
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21269/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21269",
"html_url": "https://github.com/huggingface/transformers/pull/21269",
"diff_url": "https://github.com/huggingface/transformers/pull/21269.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21269.patch",
"merged_at": 1674583483000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21268
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21268/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21268/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21268/events
|
https://github.com/huggingface/transformers/pull/21268
| 1,553,566,677
|
PR_kwDOCUB6oc5IWfjq
| 21,268
|
Supported pipeline tasks update
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
The docstring of the `transformers.pipeline` listed only 16 supported tasks while `SUPPORTED_TASKS` contains 24 tasks. This PR adds the missing tasks to the docstrings so that the generated reference docs accurately list all of the supported tasks here - https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21268/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21268",
"html_url": "https://github.com/huggingface/transformers/pull/21268",
"diff_url": "https://github.com/huggingface/transformers/pull/21268.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21268.patch",
"merged_at": 1674501800000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21267
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21267/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21267/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21267/events
|
https://github.com/huggingface/transformers/pull/21267
| 1,553,537,042
|
PR_kwDOCUB6oc5IWZFk
| 21,267
|
Remove CLI spams with Whisper FeatureExtractor
|
{
"login": "qmeeus",
"id": 25608944,
"node_id": "MDQ6VXNlcjI1NjA4OTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/25608944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qmeeus",
"html_url": "https://github.com/qmeeus",
"followers_url": "https://api.github.com/users/qmeeus/followers",
"following_url": "https://api.github.com/users/qmeeus/following{/other_user}",
"gists_url": "https://api.github.com/users/qmeeus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qmeeus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qmeeus/subscriptions",
"organizations_url": "https://api.github.com/users/qmeeus/orgs",
"repos_url": "https://api.github.com/users/qmeeus/repos",
"events_url": "https://api.github.com/users/qmeeus/events{/privacy}",
"received_events_url": "https://api.github.com/users/qmeeus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey, thanks for the contribution! I agree with you, wee should not save the filters as they just depend on the parameters with which they were created, which is why I would be in favor of simply adding the following : \r\n\r\n```python\r\n def to_dict(self) -> Dict[str, Any]:\r\n \"\"\"\r\n Serializes this instance to a Python dictionary.\r\n\r\n Returns:\r\n `Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.\r\n \"\"\"\r\n output = copy.deepcopy(self.__dict__)\r\n output[\"feature_extractor_type\"] = self.__class__.__name__\r\n if \"mel_filters\" in output:\r\n del output[\"mel_filters\"]\r\n return output\r\n```\r\nAlso cc @sanchit-gandhi this seems very logitcal to me",
"Yes indeed, I think this solution is better",
"For the remaining failing test, I suggest you rebase on main 😉 ",
"You can also modify the test to make the CI go green 😉 "
] | 1,674
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
The Whisper feature extractor representation includes the MEL filters, a list of lists that is rendered as roughly 16,000 lines, needlessly spamming the command line. I added a `__repr__` method that replaces this list with a string like `<array of shape (80, 201)>`.
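A minimal sketch of such a `__repr__` (using a toy stand-in class, not the actual `WhisperFeatureExtractor`) could look like:

```python
class MiniFeatureExtractor:
    """Toy stand-in for the Whisper feature extractor, showing one way
    a __repr__ can summarise large arrays instead of printing them."""

    def __init__(self):
        self.feature_size = 80
        self.sampling_rate = 16000
        # Stand-in for the real mel filter bank (80 x 201 floats).
        self.mel_filters = [[0.0] * 201 for _ in range(80)]

    def __repr__(self):
        parts = []
        for name, value in self.__dict__.items():
            # Replace any nested list (array-like) attribute with a
            # compact shape summary instead of dumping its contents.
            if isinstance(value, list) and value and isinstance(value[0], list):
                shape = (len(value), len(value[0]))
                parts.append(f"{name}=<array of shape {shape}>")
            else:
                parts.append(f"{name}={value!r}")
        return f"{self.__class__.__name__}({', '.join(parts)})"
```

Printing the object then yields a one-line summary rather than thousands of lines of filter coefficients.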
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21267/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21267",
"html_url": "https://github.com/huggingface/transformers/pull/21267",
"diff_url": "https://github.com/huggingface/transformers/pull/21267.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21267.patch",
"merged_at": 1676038517000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21266
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21266/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21266/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21266/events
|
https://github.com/huggingface/transformers/pull/21266
| 1,553,478,494
|
PR_kwDOCUB6oc5IWMkj
| 21,266
|
Use return_tensors="np" instead of "tf"
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
MEMBER
| null |
This PR is doing exactly the same thing as the [notebooks PR here](https://github.com/huggingface/notebooks/pull/308).
In our TF examples, we use `return_tensors="tf"` for the data collators. However, `prepare_tf_dataset` and `to_tf_dataset` actually use a NumPy loader internally, which we wrap with a `tf.data.Dataset` at the end. As a result, `return_tensors="np"` works much better for them and avoids some weird slowdown bugs we've experienced.
This PR replaces every such instance in our examples with `return_tensors="np"`. (cc @gante, @amyeroberts, @sayakpaul just so you're aware)
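Since the tf.data pipeline is fed by a NumPy loader, the collator only needs to return plain arrays at this stage. A toy, framework-free padding collator (hypothetical — not the actual transformers `DataCollatorWithPadding`) illustrates the shape of the work being done:

```python
def numpy_style_collate(batch, pad_token_id=0):
    """Pad a batch of variable-length token-id sequences to the length
    of the longest one, returning plain (NumPy-like) nested lists.
    No TF tensors are required at this stage of the pipeline."""
    max_len = max(len(seq) for seq in batch)
    return [seq + [pad_token_id] * (max_len - len(seq)) for seq in batch]
```

The conversion to `tf.Tensor` then happens once, at the `tf.data.Dataset` boundary, instead of per-batch inside the collator.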
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21266/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21266/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21266",
"html_url": "https://github.com/huggingface/transformers/pull/21266",
"diff_url": "https://github.com/huggingface/transformers/pull/21266.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21266.patch",
"merged_at": 1674567469000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21265
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21265/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21265/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21265/events
|
https://github.com/huggingface/transformers/pull/21265
| 1,553,349,589
|
PR_kwDOCUB6oc5IVwvD
| 21,265
|
Notebook examples grouping and update
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
This PR groups the notebook examples on [this page](https://huggingface.co/docs/transformers/main/en/notebooks) by modality for easier navigation. It also adds a few notebooks from the official repo that were not previously listed, e.g. fine-tuning models for image classification, semantic segmentation, video classification, image similarity, and time series.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21265/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21265",
"html_url": "https://github.com/huggingface/transformers/pull/21265",
"diff_url": "https://github.com/huggingface/transformers/pull/21265.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21265.patch",
"merged_at": 1674496284000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21264
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21264/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21264/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21264/events
|
https://github.com/huggingface/transformers/pull/21264
| 1,553,303,293
|
PR_kwDOCUB6oc5IVm8A
| 21,264
|
Generate: save generation config with the models' `.save_pretrained()`
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
MEMBER
| null |
# What does this PR do?
As originally discussed in #20388, this PR makes `model.save_pretrained()` also call `model.generation_config.save_pretrained()` if it is a generation-capable model (on all 3 frameworks).
It also adds a bunch of tests, namely:
- tests whether the generation config can be pushed to the hub
- tests whether `model.save_pretrained()` actually saves `generation_config.json` if it is a model that can generate (on all 3 frameworks)
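The new behavior can be sketched as follows (a toy illustration: `save_pretrained_sketch` is a made-up name, and this is not the real transformers implementation):

```python
import json
import os


def save_pretrained_sketch(save_dir, config, generation_config=None):
    """Toy sketch of the PR's behaviour: saving a model also writes
    generation_config.json when the model is generation-capable
    (signalled here by a non-None generation_config)."""
    os.makedirs(save_dir, exist_ok=True)
    with open(os.path.join(save_dir, "config.json"), "w") as f:
        json.dump(config, f)
    if generation_config is not None:
        with open(os.path.join(save_dir, "generation_config.json"), "w") as f:
            json.dump(generation_config, f)
```

A model that cannot generate simply skips the second file, so existing save directories stay unchanged.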
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21264/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21264",
"html_url": "https://github.com/huggingface/transformers/pull/21264",
"diff_url": "https://github.com/huggingface/transformers/pull/21264.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21264.patch",
"merged_at": 1674490904000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21263
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21263/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21263/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21263/events
|
https://github.com/huggingface/transformers/pull/21263
| 1,553,301,588
|
PR_kwDOCUB6oc5IVmlF
| 21,263
|
[Whisper] Add rescaling function with `do_normalize`
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The test works locally, merging 😉 "
] | 1,674
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Fixes #19888 by allowing the user to normalize the input audio before computing the mel spectrogram.
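As a hedged sketch of what this normalization amounts to (the helper name `normalize_audio` is illustrative, not the actual implementation): the waveform is rescaled to zero mean and unit variance before feature extraction.

```python
import numpy as np

def normalize_audio(audio: np.ndarray, eps: float = 1e-7) -> np.ndarray:
    """Rescale a waveform to zero mean and unit variance (illustrative helper)."""
    return (audio - audio.mean()) / np.sqrt(audio.var() + eps)

waveform = np.array([0.1, 0.5, -0.3, 0.2], dtype=np.float32)
normalized = normalize_audio(waveform)
```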
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21263/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21263/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21263",
"html_url": "https://github.com/huggingface/transformers/pull/21263",
"diff_url": "https://github.com/huggingface/transformers/pull/21263.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21263.patch",
"merged_at": 1677763042000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21262
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21262/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21262/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21262/events
|
https://github.com/huggingface/transformers/issues/21262
| 1,553,203,083
|
I_kwDOCUB6oc5ck_-L
| 21,262
|
[Whisper] ASR Pipeline with "return_timestamps=True" gives IndexError: index -1 is out of bounds for axis 0 with size 0
|
{
"login": "MohammedRakib",
"id": 31034499,
"node_id": "MDQ6VXNlcjMxMDM0NDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/31034499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MohammedRakib",
"html_url": "https://github.com/MohammedRakib",
"followers_url": "https://api.github.com/users/MohammedRakib/followers",
"following_url": "https://api.github.com/users/MohammedRakib/following{/other_user}",
"gists_url": "https://api.github.com/users/MohammedRakib/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MohammedRakib/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MohammedRakib/subscriptions",
"organizations_url": "https://api.github.com/users/MohammedRakib/orgs",
"repos_url": "https://api.github.com/users/MohammedRakib/repos",
"events_url": "https://api.github.com/users/MohammedRakib/events{/privacy}",
"received_events_url": "https://api.github.com/users/MohammedRakib/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks, this is normal and is currently being fixed 😉 see #21252 ",
"I have also the same problem, how you fixed this problem ? ",
"@avishai119 Are you running on `transformers@main` ? It should be fixed there.",
"what you mean ?\r\nthis is my code:\r\n\r\n`\r\nfrom transformers import AutoProcessor, pipeline\r\nfrom transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline\r\nfrom transformers import WhisperProcessor, WhisperForConditionalGeneration\r\nfrom optimum.onnxruntime import ORTModelForSpeechSeq2Seq\r\nimport librosa\r\nimport torchaudio\r\nfrom pydub.silence import split_on_silence\r\n\r\n\r\nprocessor = AutoProcessor.from_pretrained('.\\saved_processor',local_files_only=True)\r\nmodel = ORTModelForSpeechSeq2Seq.from_pretrained('.\\saved_model',local_files_only=True)\r\n\r\nmodel.config.forced_decoder_ids = processor.tokenizer.get_decoder_prompt_ids(language=\"hebrew\", task=\"transcribe\")\r\nspeech_recognition_pipeline = pipeline(\r\n \"automatic-speech-recognition\",\r\n model=model,\r\n feature_extractor=processor.feature_extractor,\r\n tokenizer=processor.tokenizer,\r\n )\r\n\r\nspeech_recognition_pipeline.model.config.forced_decoder_ids = speech_recognition_pipeline.tokenizer.get_decoder_prompt_ids(language=\"hebrew\", task=\"transcribe\")\r\n#testing\r\naudio,sr = librosa.load(\"C:\\\\Users\\\\avishai\\\\Desktop\\\\whisper-interface\\\\3.mp3\",sr=16000)\r\n\r\n\r\nresult = speech_recognition_pipeline(audio,max_new_tokens=440)\r\n\r\nprint(result)\r\n`",
"when i trying to add this : \r\n `speech_recognition_pipeline(audio,max_new_tokens=440,return_timestamps=True)`\r\nit's don't work :( ",
"Do you mind outputting the output of `transformers-cli env` ?"
] | 1,674
| 1,675
| 1,674
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @Narsil @sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Here is the link to [Google Colab Notebook](https://colab.research.google.com/drive/1ZLQXzD1IW2D1fz0WZOSEghUpewd0N3Bn?usp=sharing)
```python
!pip install git+https://github.com/huggingface/transformers
from transformers import pipeline
pipe = pipeline(
task="automatic-speech-recognition",
model='openai/whisper-small.en',
chunk_length_s=30,
stride_length_s=(5,5),
device=0,
return_timestamps=True,
)
pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(
language='en', task='transcribe'
)
res = pipe('trial.wav')
print(res)
```
Here is the stack trace for the error:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
[<ipython-input-8-3813911bc8bc>](https://localhost:8080/#) in <module>
1 # run with transformers installed from latest commit: 00ba7cadd812437708b380ab078a3cfe8cfaff31 at the moment.
2 # index out of bounds error with return_timestamps=True!!
----> 3 res = pipe('trial.wav')
4 print(res)
5
[/usr/local/lib/python3.8/dist-packages/transformers/pipelines/automatic_speech_recognition.py](https://localhost:8080/#) in __call__(self, inputs, **kwargs)
370 `"".join(chunk["text"] for chunk in output["chunks"])`.
371 """
--> 372 return super().__call__(inputs, **kwargs)
373
374 def _sanitize_parameters(
[/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py](https://localhost:8080/#) in __call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1074 return self.iterate(inputs, preprocess_params, forward_params, postprocess_params)
1075 elif self.framework == "pt" and isinstance(self, ChunkPipeline):
-> 1076 return next(
1077 iter(
1078 self.get_iterator(
[/usr/local/lib/python3.8/dist-packages/transformers/pipelines/pt_utils.py](https://localhost:8080/#) in __next__(self)
123 # We're out of items within a batch
124 item = next(self.iterator)
--> 125 processed = self.infer(item, **self.params)
126 # We now have a batch of "inferred things".
127 if self.loader_batch_size is not None:
[/usr/local/lib/python3.8/dist-packages/transformers/pipelines/automatic_speech_recognition.py](https://localhost:8080/#) in postprocess(self, model_outputs, decoder_kwargs, return_timestamps)
620 items = _find_longest_common_sequence(final_items, self.tokenizer)
621 elif stride and self.type == "seq2seq_whisper" and return_timestamps:
--> 622 items = _find_timestamp_sequence(
623 final_items, self.tokenizer, self.feature_extractor, self.model.config.max_source_positions
624 )
[/usr/local/lib/python3.8/dist-packages/transformers/pipelines/automatic_speech_recognition.py](https://localhost:8080/#) in _find_timestamp_sequence(sequences, tokenizer, feature_extractor, max_source_positions)
103 timestamp_tokens = sequence >= timestamp_begin
104 consecutive = np.where(timestamp_tokens[:-1] & timestamp_tokens[1:])[0] + 1
--> 105 last_timestamp = np.where(timestamp_tokens)[0][-1]
106 consecutive = np.append(consecutive, last_timestamp) if last_timestamp not in consecutive else consecutive
107 if seq_idx != 0:
IndexError: index -1 is out of bounds for axis 0 with size 0
```
### Expected behavior
The problem occurs when using any Whisper model from the Hub with ```return_timestamps=True``` in the ASR Pipeline. The error does NOT occur when timestamps are not requested.
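The failing line can be reproduced in isolation: `np.where(timestamp_tokens)[0][-1]` raises `IndexError` whenever a decoded chunk contains no timestamp tokens. A minimal sketch of a guard is shown below (token ids are made up for illustration; this is not the actual fix merged upstream):

```python
import numpy as np

timestamp_begin = 50363  # illustrative id of the first timestamp token
sequence = np.array([50257, 120, 433, 998])  # a chunk with no timestamp tokens

timestamp_tokens = sequence >= timestamp_begin
consecutive = np.where(timestamp_tokens[:-1] & timestamp_tokens[1:])[0] + 1

positions = np.where(timestamp_tokens)[0]
if positions.size == 0:
    last_timestamp = None  # nothing to stitch; skip timestamp merging for this chunk
else:
    last_timestamp = positions[-1]
```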
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21262/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21261
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21261/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21261/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21261/events
|
https://github.com/huggingface/transformers/issues/21261
| 1,553,133,333
|
I_kwDOCUB6oc5cku8V
| 21,261
|
installation.mdx.
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21261/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21260
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21260/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21260/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21260/events
|
https://github.com/huggingface/transformers/pull/21260
| 1,553,125,653
|
PR_kwDOCUB6oc5IVBlJ
| 21,260
|
Update TF doc test template
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Closed as changes are a subset of changes introduced in #21225 "
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
The PR #21106 introduced failures in some doctests:
* src/transformers/models/deit/modeling_tf_deit.py::transformers.models.deit.modeling_tf_deit.TFDeiTForImageClassificationWithTeacher.call
* src/transformers/models/resnet/modeling_tf_resnet.py::transformers.models.resnet.modeling_tf_resnet.TFResNetModel.call
* src/transformers/models/segformer/modeling_tf_segformer.py::transformers.models.segformer.modeling_tf_segformer.TFSegformerForImageClassification.call
* src/transformers/models/vit/modeling_tf_vit.py::transformers.models.vit.modeling_tf_vit.TFViTModel.call
This was due to `processor_class` no longer being passed to `add_code_sample_docstrings` e.g. the changes to [modeling_tf_deit.py](https://github.com/huggingface/transformers/pull/21106/files#diff-d8a9a4a182509f1903e7dbcd751d605285c8b62d0c8213a1a9ae1ba15e9fcc77).
Whilst `processor_class` could be removed for the PyTorch models doctests, `{processor_class}` hadn't been removed for the equivalent TensorFlow doctest templates. This updates the test templates to match the equivalent PyTorch ones and resolve failing tests.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21260/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21260",
"html_url": "https://github.com/huggingface/transformers/pull/21260",
"diff_url": "https://github.com/huggingface/transformers/pull/21260.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21260.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21259
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21259/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21259/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21259/events
|
https://github.com/huggingface/transformers/pull/21259
| 1,553,121,327
|
PR_kwDOCUB6oc5IVApz
| 21,259
|
Add methods to PreTrainedModel to use PyTorch's BetterTransformer
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"as a side note, since in the previous `optimum` versions the `save_pretrained` and `push_to_hub` methods [are not blocked](https://github.com/huggingface/optimum/blob/18e73f3ba4be33071f53650824fe625d3018af40/optimum/bettertransformer/transformation.py#L236), I propose to explicitly block them for transformed models in this PR and/or force users to use a certain version of `optimum`.",
"Yes we should probably force the next optimum version.",
"Should be ready @sgugger , the documentation has been extended in https://moon-ci-docs.huggingface.co/docs/transformers/pr_21259/en/perf_infer_gpu_one .\r\n\r\nLet me know if I should add a test - in which case optimum should be added in the setup.py, I guess.",
"@fxmarty there should be no need to add `optimum` in `setup.py`, we can do something similar than `bitsandbytes` and add `optimum` in the Dockerfile of the Docker image that will run the slow tests: https://github.com/huggingface/transformers/blob/0db5d911fc94604f9568b4b212e005ec4600d157/docker/transformers-all-latest-gpu/Dockerfile#L52 \r\nI very much agree that we should add tests, especially to test `accelerate` compatibility, happy to help you on this, let me know if you need help",
"Thanks, will do!\r\n\r\n> especially to test accelerate compatibility\r\n\r\nIsn't this already tested on Optimum side?",
"> Isn't this already tested on Optimum side? \r\n\r\nYes but the tests [are run on GPU](https://github.com/huggingface/optimum/blob/40a01b3c883ca3c092a4493d3f5ca524ed3109ab/tests/bettertransformer/test_bettertransformer_encoder.py#L186 ): therefore not run on any of the runners on `optimum` on a daily basis (but not sue if there are tested somewhere else) - I just asked individually to each contributor to run the `accelerate` test locally on their GPU before merging (only in case I have serious doubts that the PR breaks anything related to `accelerate`).\r\nSince in `transformers` tests are run on GPU on daily basis, we can leverage that and setup a small `BetterTransformer` testing suite that tests all the tests + `accelerate` compatibility. Also this enables us to flag anything we need to upstream to `accelerate` if something breaks `BT` integration with `accelerate`",
"There are tests on the daily basis on GPU in Optimum, for example https://github.com/huggingface/optimum/blob/main/.github/workflows/test_onnxruntime_train.yml and https://github.com/huggingface/optimum/blob/main/.github/workflows/test_onnxruntime_gpu.yml\r\n\r\nIn my opinion, thorough tests should be added in Optimum, not Transformers. The test I was thinking of in Transformers was only an integration one to check that there's no error.",
"There is an issue with `accelerate` loaded models and `transform` from BT, let's wait until this gets fixed before merging this PR",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"not stale",
"If you want this PR included in the next release, you should finish the work and have it merged sooner rather than later :-)\r\nThe last I saw was Younes telling we should wait for a fix, was that fix added? Then this needs a rebase on main since it has been a while.",
"Thanks for the headsup! \r\nIndeed we are working on fixing some bugs on `optimum` side that was introduced by one of my PRs (the revert-transform PR) before adding the `invert_transform` method\r\nWe can maybe merge this PR by keeping only `transform` method and blocking the `save_pretrained` & `push_to_hub` methods after transforming the model",
"> you should finish the work and have it merged sooner rather than later :-)\r\n\r\nThere is substantial work left in Optimum before this should be merged. Marking as draft for now!",
"OK, so this won't be in the next release of Transformers (probably this week in preparation for PyTorch 2.0).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @fxmarty and @younesbelkada, are there standing PRs in `optimum` that need to be merged for this to proceed/anything we can help with to have this move forward? Thanks :)",
"Hey @LysandreJik @sgugger \r\n@fxmarty recently managed to fix all issues related to decoder-based models integration in `optimum`! I believe that this PR could be re-opened, in my understanding we just need to add few tests and we should be good to go",
"@sgugger @LysandreJik this is now ready for review! "
] | 1,674
| 1,682
| 1,682
|
COLLABORATOR
| null |
As per title.
Should be merged only on the next Optimum release that will include https://github.com/huggingface/optimum/pull/676
## Before submitting
Tests are still to be done.
## Who can review?
@younesbelkada @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21259/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21259",
"html_url": "https://github.com/huggingface/transformers/pull/21259",
"diff_url": "https://github.com/huggingface/transformers/pull/21259.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21259.patch",
"merged_at": 1682586222000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21258
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21258/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21258/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21258/events
|
https://github.com/huggingface/transformers/pull/21258
| 1,553,120,483
|
PR_kwDOCUB6oc5IVAeO
| 21,258
|
Add missing checkpoint for doctest
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
The checkpoint for mobilenetv2 was accidentally removed in #21106 (see file change [here](https://github.com/huggingface/transformers/pull/21106/files#diff-f224d96e46d68f58f9632184f915210b3217a61253c42ff6354763e1b0f34050)), resulting in failing doctests. This adds it back.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21258/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21258",
"html_url": "https://github.com/huggingface/transformers/pull/21258",
"diff_url": "https://github.com/huggingface/transformers/pull/21258.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21258.patch",
"merged_at": 1674487646000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21257
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21257/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21257/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21257/events
|
https://github.com/huggingface/transformers/pull/21257
| 1,553,114,587
|
PR_kwDOCUB6oc5IU_Mx
| 21,257
|
[ci-daily] Fix pipeline tests
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Should fix the `automatic_speech_recognition_pipeline` tests.
Also uses a `streaming` dataset to speed up the tests, which seems like a good idea since only a single sample is needed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21257/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21257/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21257",
"html_url": "https://github.com/huggingface/transformers/pull/21257",
"diff_url": "https://github.com/huggingface/transformers/pull/21257.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21257.patch",
"merged_at": 1674498770000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21256
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21256/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21256/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21256/events
|
https://github.com/huggingface/transformers/pull/21256
| 1,553,066,405
|
PR_kwDOCUB6oc5IU0se
| 21,256
|
Fix MaskFormerImageProcessor.post_process_instance_segmentation
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for fixing! Should we add a corresponding test for it, which verifies the postprocessed results?\r\n\r\nI added a test but Mask2Former, unlike MaskFormer, outputs segmentation maps of shape (96, 96) instead of the preprocessed input size for efficiency. They scale the mask logits to the preprocessed image size during postprocessing (same for semantic and panoptic segmentation), even if no` target_sizes` is passed. I think it'd better to add an image processor for Mask2Former as its post-processing requires additional scaling.\r\n\r\nWhat do you think @NielsRogge @sgugger?",
"If postprocessing is different, then it indeed requires its own image processor class."
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the `post_process_instance_segmentation` method of `MaskFormerImageProcessor`. This issue mainly affects Mask2Former as it uses MaskFormerImageProcessor and there aren't any MaskFormer models trained on instance segmentation datasets.
Unlike in panoptic segmentation post-processing, the final score of each binary mask proposal is calculated by multiplying the mask proposal score with the class score. The `mask_threshold` and `overlap_mask_area_threshold` arguments are no longer needed; I can either add a deprecation warning or leave them as they are for now.
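For illustration, a minimal sketch of the scoring described above (this is not the actual `MaskFormerImageProcessor` code; all shapes and the 0.5 threshold here are made up):

```python
import torch

def combined_instance_scores(class_logits, mask_logits):
    """Toy sketch: combine per-query class scores with mask proposal scores.

    class_logits: (num_queries, num_classes + 1) -- last index is "no object"
    mask_logits:  (num_queries, height, width)
    """
    # Class score: best non-"no object" class probability per query
    class_probs = class_logits.softmax(-1)[:, :-1]
    class_scores, labels = class_probs.max(-1)
    # Mask proposal score: mean foreground probability inside the binary mask
    mask_probs = mask_logits.sigmoid()
    foreground = mask_probs > 0.5
    mask_scores = (mask_probs * foreground).sum((1, 2)) / (foreground.sum((1, 2)) + 1e-6)
    # Final score = mask proposal score * class score
    return class_scores * mask_scores, labels

torch.manual_seed(0)
scores, labels = combined_instance_scores(torch.randn(5, 4), torch.randn(5, 8, 8))
print(scores.shape, labels.shape)
```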
Post-processed results of the `mask2former-swin-small-coco-instance` model inference:

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21256/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21256",
"html_url": "https://github.com/huggingface/transformers/pull/21256",
"diff_url": "https://github.com/huggingface/transformers/pull/21256.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21256.patch",
"merged_at": 1674575370000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21255
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21255/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21255/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21255/events
|
https://github.com/huggingface/transformers/issues/21255
| 1,553,064,306
|
I_kwDOCUB6oc5ckeFy
| 21,255
|
DistilBertModel to sequence classification
|
{
"login": "Rane90",
"id": 34508332,
"node_id": "MDQ6VXNlcjM0NTA4MzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/34508332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rane90",
"html_url": "https://github.com/Rane90",
"followers_url": "https://api.github.com/users/Rane90/followers",
"following_url": "https://api.github.com/users/Rane90/following{/other_user}",
"gists_url": "https://api.github.com/users/Rane90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rane90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rane90/subscriptions",
"organizations_url": "https://api.github.com/users/Rane90/orgs",
"repos_url": "https://api.github.com/users/Rane90/repos",
"events_url": "https://api.github.com/users/Rane90/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rane90/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey! These kinds of questions make more sense if you ask on the [forum](https://discuss.huggingface.co/), as this is not exactly an issue nor a bug. Nevertheless, IMO you should use `DistilBertForSequenceClassification` and just modify the `max_position_embeddings`. \r\n\r\nClosing as it is not an issue"
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
### System Info
Hi,
I'm trying to create a ```DistilBertModel``` for sequence classification with ```max_position_embeddings=1024``` (otherwise I would have used ```DistilBertForSequenceClassification```, which defaults to ```max_position_embeddings=512```).
I define the model in the following way:
```
configuration = DistilBertConfig(max_position_embeddings=1024)
model = DistilBertModel(configuration)
```
When forwarding an input to the model in the following way:
```
output = model(ids, attention_mask = mask, return_dict=False)[0]
```
such that ```ids.shape = (batch_size, 1024)``` and ```mask.shape = (batch_size, 1024)``` the shape of the output is ```(batch_size, 1024, 768)``` .
My question is: what is the best practice to convert this output into a probability vector over the number of labels, such that the modified output shape would be ```(batch_size, num_labels)```?
I thought of a few options including flattening the current output + an additional FC layer, but I'm not sure this is the best practice.
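Rather than flattening, a common pattern (roughly what `DistilBertForSequenceClassification` does internally) is to pool the first token's hidden state and feed it through a linear head. A minimal sketch operating on a stand-in tensor, with all sizes hypothetical:

```python
import torch
import torch.nn as nn

# Toy sketch (not the actual transformers API): given DistilBert's last hidden state
# of shape (batch_size, seq_len, hidden_size), pool the first token and project
# down to num_labels.
batch_size, seq_len, hidden_size, num_labels = 2, 1024, 768, 3
last_hidden_state = torch.randn(batch_size, seq_len, hidden_size)  # stand-in for model output

classifier = nn.Linear(hidden_size, num_labels)
pooled = last_hidden_state[:, 0]   # first-token ("[CLS]"-style) pooling
logits = classifier(pooled)        # (batch_size, num_labels)
probs = logits.softmax(-1)         # probability vector over the labels
print(probs.shape)
```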
would it be possible to add a config parameter for ```DistilBertConfig``` to automatically enable this behavior?
Thank you in advance :)
@ArthurZucker , @younesbelkada
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import DistilBertConfig, DistilBertModel
import torch

configuration = DistilBertConfig(max_position_embeddings=1024)
model = DistilBertModel(configuration)

ids = torch.randint(0, configuration.vocab_size, (2, 1024))
mask = torch.ones_like(ids)
output = model(ids, attention_mask=mask, return_dict=False)[0]
print(output.shape)
# (batch_size, 1024, 768)
```
### Expected behavior
My question is: what is the best practice to convert this output into a probability vector over the number of labels, such that the modified output shape would be ```(batch_size, num_labels)```?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21255/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21254
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21254/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21254/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21254/events
|
https://github.com/huggingface/transformers/pull/21254
| 1,553,049,918
|
PR_kwDOCUB6oc5IUxHT
| 21,254
|
Fix reformer CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Some fixes are required for doctest after #21199. See comments in the review.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21254/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21254",
"html_url": "https://github.com/huggingface/transformers/pull/21254",
"diff_url": "https://github.com/huggingface/transformers/pull/21254.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21254.patch",
"merged_at": 1674484454000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21253
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21253/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21253/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21253/events
|
https://github.com/huggingface/transformers/pull/21253
| 1,552,924,774
|
PR_kwDOCUB6oc5IUV7t
| 21,253
|
[WIP] Adding GPT2 with Multi Query Attention
|
{
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Regarding tests `test_batch_generation` and `test_batch_generation_2heads`: if the tokenizer initialisation class is changed from `GPT2Tokenizer` to `GPT2TokenizerFast`, the test passes up until the generated-tokens assertion. Is this intended behaviour, or should the loading functionality have rerouted from the default class?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closing in favour of #22575 "
] | 1,674
| 1,682
| 1,682
|
MEMBER
| null |
# Adding GPT2 with Multi Query Attention
This PR adds a GPT2 architecture with Multi-Query Attention (MQA). With MQA the K and V projections are shared across heads and only the Q projections are per-head, which makes it possible to run the model with very large batches.
This is the Architecture used in [BigCode's SantaCoder](https://huggingface.co/bigcode/santacoder).
There are a few things to do before we can merge the PR:
- add performance improvements suggested by @jlamypoirier
- fix tests:
- there is an issue with `past`
- there is an issue with loading the tokenizer (i guess missing vocab file in repo?)
- fix the generation examples
You can run the tests with:
```bash
RUN_SLOW=1 python -m pytest -s -v ./tests/models/gpt2mqa/
```
cc @bigximik @jlamypoirier @RaymondLi0
To review when ready I tag @ArthurZucker and @younesbelkada.
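For reference, a toy sketch of the MQA idea (hypothetical shapes, not this PR's implementation): each head gets its own queries, while a single shared K/V head is broadcast across all heads:

```python
import torch

def multi_query_attention(x, wq, wk, wv, n_heads):
    """Toy multi-query attention: per-head queries, one shared K/V head.

    x:  (batch, seq, d_model)
    wq: (d_model, d_model)       -- projects to n_heads query heads
    wk, wv: (d_model, head_dim)  -- a SINGLE shared key/value head
    """
    b, s, d = x.shape
    head_dim = d // n_heads
    q = (x @ wq).view(b, s, n_heads, head_dim).transpose(1, 2)  # (b, h, s, hd)
    k = (x @ wk).unsqueeze(1)   # (b, 1, s, hd) -- broadcast over heads
    v = (x @ wv).unsqueeze(1)   # (b, 1, s, hd)
    attn = ((q @ k.transpose(-1, -2)) / head_dim**0.5).softmax(-1)
    out = attn @ v              # (b, h, s, hd)
    return out.transpose(1, 2).reshape(b, s, d)

torch.manual_seed(0)
x = torch.randn(2, 5, 16)
out = multi_query_attention(x, torch.randn(16, 16), torch.randn(16, 4), torch.randn(16, 4), n_heads=4)
print(out.shape)
```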
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21253/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21253",
"html_url": "https://github.com/huggingface/transformers/pull/21253",
"diff_url": "https://github.com/huggingface/transformers/pull/21253.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21253.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21252
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21252/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21252/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21252/events
|
https://github.com/huggingface/transformers/pull/21252
| 1,552,877,795
|
PR_kwDOCUB6oc5IULuD
| 21,252
|
[Whisper] Refactor whisper
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"\"The language is automatically detected\". From my experience the language detection by Whisper is very unreliable. Will it still be possible to specify language?",
"Sure, let's make sure we still allow the language to be passed! Thanks for pointing this out",
"Once #21257 is merged, the tests here should also pass ! ",
"Pipeline tests need #21269 to be merge 😉 ",
"The two failing tests are from the latest modification of the multilingual tokenizer's config"
] | 1,674
| 1,706
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
The goal of this PR is to allow the users to do the following :
```python
...
whisper_model.generate(audio, return_timestamps=True)
whisper_model.generate(audio, return_timestamps=True, task="transcribe")
```
The language is automatically detected. This also simplifies the pipeline calls and adds a good example of the intended usage of `generation_config`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21252/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21252/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21252",
"html_url": "https://github.com/huggingface/transformers/pull/21252",
"diff_url": "https://github.com/huggingface/transformers/pull/21252.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21252.patch",
"merged_at": 1674648584000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21251
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21251/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21251/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21251/events
|
https://github.com/huggingface/transformers/pull/21251
| 1,552,872,673
|
PR_kwDOCUB6oc5IUKoN
| 21,251
|
Generate: precision fix in compute_transition_scores doctests
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
MEMBER
| null |
# What does this PR do?
See title -- it was causing doctests to fail.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21251/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21251",
"html_url": "https://github.com/huggingface/transformers/pull/21251",
"diff_url": "https://github.com/huggingface/transformers/pull/21251.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21251.patch",
"merged_at": 1674472431000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21250
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21250/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21250/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21250/events
|
https://github.com/huggingface/transformers/pull/21250
| 1,552,867,539
|
PR_kwDOCUB6oc5IUJhf
| 21,250
|
[Whisper] fix all issues with unk token
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Previously, all OOV (and thus timestamp) tokens output by the model were decoded to `<|endoftext|>` by the `xxx.en` Whisper models. This does not happen with the multilingual model only because I added `""` to the vocabulary, and the `unk_token_id` maps to that same `""`. But this does not really make sense.
As the default behavior for Whisper is just to output `""` for any OOV token, the `_convert_id_to_token` function now does not use an `unk_token`.
This fixes the inconsistency and will help with the Whisper refactoring.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21250/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21250",
"html_url": "https://github.com/huggingface/transformers/pull/21250",
"diff_url": "https://github.com/huggingface/transformers/pull/21250.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21250.patch",
"merged_at": 1674501597000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21249
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21249/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21249/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21249/events
|
https://github.com/huggingface/transformers/issues/21249
| 1,552,619,152
|
I_kwDOCUB6oc5cixaQ
| 21,249
|
Unable to use GPU during wav2vec2 decoding
|
{
"login": "manjuke",
"id": 6142443,
"node_id": "MDQ6VXNlcjYxNDI0NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6142443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manjuke",
"html_url": "https://github.com/manjuke",
"followers_url": "https://api.github.com/users/manjuke/followers",
"following_url": "https://api.github.com/users/manjuke/following{/other_user}",
"gists_url": "https://api.github.com/users/manjuke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manjuke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manjuke/subscriptions",
"organizations_url": "https://api.github.com/users/manjuke/orgs",
"repos_url": "https://api.github.com/users/manjuke/repos",
"events_url": "https://api.github.com/users/manjuke/events{/privacy}",
"received_events_url": "https://api.github.com/users/manjuke/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"`pyctcdecode` doesn't support GPU",
"Indeed, `pyctcdecode` is a CPU only decoding method. PyTorch recently released a fast beam search decoder with a Flashlight backend: https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/\r\n\r\nWe could look at integrating this into transformers for faster CTC + LM decoding!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,678
| 1,678
|
NONE
| null |
### System Info
Hi All,
I have built a fine-tuned model for Tamil using facebook/wav2vec2-xls-r-300m. I can run inference successfully on CPU. However, wav2vec2 decoding (with pyctcdecode) on GPU is not working. I have tried setting device='gpu' in the decoding script, and also running the decoding script as “python -m torch.distributed.launch --nproc_per_node=<num of GPUs> <Decoding_Script.py>”. But monitoring the nvidia-smi output during decoding shows that neither method uses the GPU. Please suggest @sanchit-gandhi. Thanks
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use 'cuda' as device for decoding

```python
audio_name = pd.DataFrame(test_data).audio[i]
text_org = pd.DataFrame(test_data).text[i]
audio_input, sample_rate = sf.read(audio_name)

# with LM
input_values = processor_with_lm(audio_input, sampling_rate=16000, return_tensors="pt").input_values
logits = model(input_values).logits
hypothesis = processor_with_lm.batch_decode(logits.detach().numpy()).text
text_with_lm = hypothesis[0]

# without LM
input_values_wo = processor(audio_input, sampling_rate=16000, return_tensors="pt").input_values
logits_wo = model(input_values_wo).logits
predicted_ids = torch.argmax(logits_wo, dim=-1)
hypothesis_wo_lm = processor.decode(predicted_ids[0])
text_wo_lm = hypothesis_wo_lm.replace('[PAD]', '')
```
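Note that `pyctcdecode`'s beam search is CPU-only; at most the wav2vec2 forward pass can be placed on the GPU. A hedged sketch of how that split might look (the commented-out names refer to the objects in the report above, so they are left as comments here):

```python
import torch

# Only the acoustic model's forward pass can use the GPU; pyctcdecode's
# beam search itself runs on the CPU regardless.
device = "cuda" if torch.cuda.is_available() else "cpu"

# model = model.to(device)  # move the fine-tuned model once, outside the per-file loop
# input_values = processor_with_lm(audio_input, sampling_rate=16000,
#                                  return_tensors="pt").input_values.to(device)
# with torch.no_grad():
#     logits = model(input_values).logits
# # bring logits back to the CPU for pyctcdecode's (CPU-only) beam search
# hypothesis = processor_with_lm.batch_decode(logits.cpu().numpy()).text

print(device)
```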
### Expected behavior
GPU decoding should have happened and nvidia-smi output to be shown accordingly.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21249/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21248
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21248/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21248/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21248/events
|
https://github.com/huggingface/transformers/issues/21248
| 1,552,604,015
|
I_kwDOCUB6oc5cittv
| 21,248
|
add interface or integration to provide interpretability/explainability of hugging face models
|
{
"login": "aahmadai",
"id": 49294247,
"node_id": "MDQ6VXNlcjQ5Mjk0MjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/49294247?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aahmadai",
"html_url": "https://github.com/aahmadai",
"followers_url": "https://api.github.com/users/aahmadai/followers",
"following_url": "https://api.github.com/users/aahmadai/following{/other_user}",
"gists_url": "https://api.github.com/users/aahmadai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aahmadai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aahmadai/subscriptions",
"organizations_url": "https://api.github.com/users/aahmadai/orgs",
"repos_url": "https://api.github.com/users/aahmadai/repos",
"events_url": "https://api.github.com/users/aahmadai/events{/privacy}",
"received_events_url": "https://api.github.com/users/aahmadai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,677
| 1,677
|
NONE
| null |
### Feature request
Require models to answer requests about training biases and model transparency, for interpretability and explainability within the workflow of such model processes; currently there is no support for this on the platform.
e.g.
a form of blackbox testing on the models
a form of ui interface
a way to evaluate
a way to visualize
a lineage of data changes in the learning process
a publicly available benchmarking of models
a way to re-tune the models to remove bias
perhaps, linkage to captum/lime, or other such tooling
### Motivation
regulation requirements for trustworthy ai (to be able to answer how the model learned this for correctness, transparency, and fairness)
to be able to correct the biases in training datasets
this is important because you have an array of models which support zero transparency.
it is also important for progressing ai.
to build model lineage
to provide for continued compliance and data governance across different geographic regulations in use of such models.
fundamentally, to answer these questions:
how the result was produced
whether the model was correct in producing such a result based on the implementation
[https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html]
### Your contribution
not sure how I can help if the developers have yet to even add such a feature and make themselves unapproachable; this is something that is constantly overlooked by the ML/DL community, with a lot of marketing hype, where they cannot fully explain outside of a research paper how the model process reached that result.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21248/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21246
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21246/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21246/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21246/events
|
https://github.com/huggingface/transformers/issues/21246
| 1,552,330,878
|
I_kwDOCUB6oc5chrB-
| 21,246
|
BERT Embedding Weights VS Last Hidden State
|
{
"login": "leejiayi098",
"id": 51938633,
"node_id": "MDQ6VXNlcjUxOTM4NjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/51938633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leejiayi098",
"html_url": "https://github.com/leejiayi098",
"followers_url": "https://api.github.com/users/leejiayi098/followers",
"following_url": "https://api.github.com/users/leejiayi098/following{/other_user}",
"gists_url": "https://api.github.com/users/leejiayi098/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leejiayi098/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leejiayi098/subscriptions",
"organizations_url": "https://api.github.com/users/leejiayi098/orgs",
"repos_url": "https://api.github.com/users/leejiayi098/repos",
"events_url": "https://api.github.com/users/leejiayi098/events{/privacy}",
"received_events_url": "https://api.github.com/users/leejiayi098/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"After much more digging, I found a superb in-depth explanation by Alexey Kravets in this article: \r\nhttps://towardsdatascience.com/deep-dive-into-the-code-of-bert-model-9f618472353e\r\n\r\nApparently, the last hidden state returns the context-aware representations of the word embeddings, which are calculated from the pretrained model weights via normalization and the attention mechanism (using pretrained weights and biases for the Q, K and V matrices). \r\n\r\nEssentially: feeding the pretrained feature vectors into the pretrained encoder model. For sequence-based tasks, this method is definitely more appropriate as compared to using the unprocessed context-invariant embeddings. ",
"@leejiayi098 Hi, I've also run into this problem. So if I want to get all word embeddings in my vocab from the last hidden state, is there an easy way to do it? Otherwise, I have to construct the input text and use the model to do inference.",
"so this suggests that we should use the `output.last_hidden_state` as embeddings for practical applications? as those embeddings are context-aware rather than the other one right?\r\n\r\ni also found a huggingface official blog ([here](https://huggingface.co/blog/getting-started-with-embeddings)) on embeddings which states about using a prebuilt endpoint with the code: \r\n\r\n```python3\r\nimport requests\r\n\r\napi_url = f\"https://api-inference.huggingface.co/pipeline/feature-extraction/{model_id}\"\r\nheaders = {\"Authorization\": f\"Bearer {hf_token}\"}\r\n```\r\n\r\nso considering my first statement is correct regarding `last_hidden_state`, which one does this endpoint actually return? `last_hidden_state` or the other one?\r\n\r\nany help is truly appreciated. thanks.\r\n\r\ncc @leejiayi098 @Jackie-shi @vanpelt @tmm1 \r\n"
] | 1,674
| 1,694
| 1,674
|
NONE
| null |
### BERT for Feature Extraction: Embedding Weights VS Last Hidden State
I am trying to extract the pretrained BERT token embeddings and get the feature vector for any specific token by indexing the token ID. However, I found that indexing the pretrained embedding matrix returns very different values as compared to feeding the token IDs into the encoder to get the embeddings. Am I missing something? I can't find this anywhere in the official documentation/tutorials and I've seen the use of both methods to extract features, which is quite concerning if implemented without proper understanding.
```
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")

# feed input into pretrained encoder
input = tokenizer(text, return_tensors='pt')
output = model(**input)

# get pretrained token embeddings
embedding_matrix = model.embeddings.word_embeddings.weight

# look up embeddings for the token sequence
embeddings = embedding_matrix[input['input_ids']]

# why are the token embeddings different before fine-tuning?
torch.all(embeddings == output.last_hidden_state)
```
**Examples:**
_**text = "hello"**_
embeddings
```
tensor([[[ 0.0136, -0.0265, -0.0235, ..., 0.0087, 0.0071, 0.0151],
[-0.0043, -0.0330, -0.0217, ..., -0.0425, -0.0127, -0.0389],
[-0.0145, -0.0100, 0.0060, ..., -0.0250, 0.0046, -0.0015]]],
grad_fn=<IndexBackward0>)
```
output.last_hidden_state
```
tensor([[[-0.3061, 0.2622, -0.1896, ..., -0.1651, 0.1014, 0.4119],
[-0.7390, -0.0336, 0.3932, ..., -0.1818, -0.1839, -0.2185],
[ 0.5801, 0.0627, -0.2637, ..., 0.3963, -0.5684, -0.4924]]],
grad_fn=<NativeLayerNormBackward0>)
```
_**text = "hello!"**_
embeddings
```
tensor([[[ 0.0136, -0.0265, -0.0235, ..., 0.0087, 0.0071, 0.0151],
[-0.0043, -0.0330, -0.0217, ..., -0.0425, -0.0127, -0.0389],
[ 0.0298, -0.0373, -0.0356, ..., 0.0161, 0.0192, 0.0173],
[-0.0145, -0.0100, 0.0060, ..., -0.0250, 0.0046, -0.0015]]],
grad_fn=<IndexBackward0>)
```
output.last_hidden_state
```
tensor([[[-0.0509, 0.1088, -0.1411, ..., -0.1243, -0.0803, 0.2858],
[-0.6771, -0.5464, 0.0878, ..., -0.0575, 0.0359, -0.3080],
[-1.0903, -0.9996, -0.5636, ..., 0.3232, -0.2773, -0.1463],
[ 0.8302, 0.0501, -0.2251, ..., 0.3216, -0.6489, -0.2456]]],
grad_fn=<NativeLayerNormBackward0>)
```
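The two lookups differ because BERT's encoder transforms the static embeddings through its attention layers before returning `last_hidden_state`. A framework-free toy sketch of that distinction (the tiny averaging "encoder" here is purely illustrative and not BERT's actual attention mechanism):

```python
# Toy embedding matrix: vocabulary of 5 tokens, embedding dim 3
embedding_matrix = [
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
    [0.7, 0.8, 0.9],
    [1.0, 1.1, 1.2],
    [1.3, 1.4, 1.5],
]

def lookup(ids):
    # Static lookup: plain row indexing, identical for a token in any context
    return [embedding_matrix[i] for i in ids]

def tiny_encoder(vectors):
    # Stand-in for BERT's encoder: each output mixes information from every
    # position, so it depends on the whole sequence (context-aware)
    n = len(vectors)
    return [[sum(v[d] for v in vectors) / n for d in range(3)] for _ in vectors]

ids = [1, 3]
static = lookup(ids)
contextual = tiny_encoder(static)
print(static == lookup(ids))   # True: lookup is deterministic per token id
print(static == contextual)    # False: the encoder transforms the vectors
```

The same reasoning explains the observation above: indexing `word_embeddings.weight` gives the context-invariant input vectors, while `last_hidden_state` holds those vectors after the encoder has processed them.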
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21246/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21245
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21245/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21245/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21245/events
|
https://github.com/huggingface/transformers/pull/21245
| 1,552,277,100
|
PR_kwDOCUB6oc5ISMPA
| 21,245
|
[GIT] Convert more checkpoints
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Microsoft open-sourced some more GIT checkpoints (see https://github.com/microsoft/GenerativeImage2Text/issues/34#issuecomment-1374378625), hence I've converted them by extending the conversion script.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21245/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21245/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21245",
"html_url": "https://github.com/huggingface/transformers/pull/21245",
"diff_url": "https://github.com/huggingface/transformers/pull/21245.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21245.patch",
"merged_at": 1674483568000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21244
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21244/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21244/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21244/events
|
https://github.com/huggingface/transformers/issues/21244
| 1,552,257,574
|
I_kwDOCUB6oc5chZIm
| 21,244
|
Models not in eval()-mode when loaded with from_config()
|
{
"login": "hadsed",
"id": 2019168,
"node_id": "MDQ6VXNlcjIwMTkxNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2019168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hadsed",
"html_url": "https://github.com/hadsed",
"followers_url": "https://api.github.com/users/hadsed/followers",
"following_url": "https://api.github.com/users/hadsed/following{/other_user}",
"gists_url": "https://api.github.com/users/hadsed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hadsed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hadsed/subscriptions",
"organizations_url": "https://api.github.com/users/hadsed/orgs",
"repos_url": "https://api.github.com/users/hadsed/repos",
"events_url": "https://api.github.com/users/hadsed/events{/privacy}",
"received_events_url": "https://api.github.com/users/hadsed/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"A model created with `from_config` will have random weights and is thus not suitable for inference. This is why it is put in training mode, as the documentation clearly states. In any case, it has been the case for such a long time that reverting this would surprise way more users with a breaking change.",
"The code snippet I posted above does not load random weights.",
"Yes it does.",
"Well that explains a lot. I stand corrected, thank you."
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
### System Info
The docs say that models loaded with `from_pretrained()` are put in `model.eval()` mode by default. But when using `from_config()` that's not the case, even though the config and tokenizer are still loaded with `from_pretrained()`, like so:
```python
config = AutoConfig.from_pretrained(
MODEL_NAME,
padding='max_length',
truncation=True,
output_hidden_states=True,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, config=config)
model = AutoModelForSequenceClassification.from_config(config)
```
I'd like to argue that we should also put the model in `eval()` mode when using `from_config()`. I know at least two other people who have spent a great number of hours validating and hunting for that. Similar reasoning to https://github.com/huggingface/transformers/issues/695#issuecomment-502964803: I think it's important to make things deterministic out of the box.
Or, open to understanding why that wouldn't be the case.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Try something like this:
```python
config = AutoConfig.from_pretrained(
MODEL_NAME,
padding='max_length',
truncation=True,
output_hidden_states=True,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, config=config)
model = AutoModelForSequenceClassification.from_config(config)
```
### Expected behavior
I'd expect the model to be loaded in `eval()` mode.
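To illustrate why training mode undermines determinism, here is a framework-free toy stand-in (the `TinyModel` class is hypothetical; it only mimics PyTorch's `training` flag and dropout, and is not the actual `nn.Module` API):

```python
import random

class TinyModel:
    """Hypothetical stand-in for an nn.Module containing dropout."""

    def __init__(self):
        self.training = True  # like PyTorch modules, starts in training mode

    def eval(self):
        self.training = False
        return self

    def forward(self, x):
        if self.training:
            # Dropout: randomly zero half the activations and rescale the rest
            return [0.0 if random.random() < 0.5 else v * 2 for v in x]
        return list(x)  # eval mode: deterministic pass-through

model = TinyModel()
a = model.forward([1.0, 2.0, 3.0])
b = model.forward([1.0, 2.0, 3.0])  # may differ from `a` in training mode

model.eval()  # what this issue asks from_config() to do by default
c = model.forward([1.0, 2.0, 3.0])
d = model.forward([1.0, 2.0, 3.0])
print(c == d)  # True: identical calls agree once dropout is disabled
```

Until (or unless) the default changes, calling `model.eval()` explicitly after `from_config()` gives the deterministic behavior described above.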
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21244/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21243
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21243/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21243/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21243/events
|
https://github.com/huggingface/transformers/issues/21243
| 1,552,230,382
|
I_kwDOCUB6oc5chSfu
| 21,243
|
How to create distil-opt/bloom
|
{
"login": "omerarshad",
"id": 16164105,
"node_id": "MDQ6VXNlcjE2MTY0MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/16164105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omerarshad",
"html_url": "https://github.com/omerarshad",
"followers_url": "https://api.github.com/users/omerarshad/followers",
"following_url": "https://api.github.com/users/omerarshad/following{/other_user}",
"gists_url": "https://api.github.com/users/omerarshad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omerarshad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omerarshad/subscriptions",
"organizations_url": "https://api.github.com/users/omerarshad/orgs",
"repos_url": "https://api.github.com/users/omerarshad/repos",
"events_url": "https://api.github.com/users/omerarshad/events{/privacy}",
"received_events_url": "https://api.github.com/users/omerarshad/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is not an issue, could you maybe ask the question in the [forum](https://discuss.huggingface.co/)? Also, the answer is no. @younesbelkada worked a bit on this so he can answer if you ping him on the forum."
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
Is there any script to create a distilled version of the OPT or BLOOM models?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21243/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21242
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21242/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21242/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21242/events
|
https://github.com/huggingface/transformers/pull/21242
| 1,552,066,891
|
PR_kwDOCUB6oc5IRi75
| 21,242
|
[`pipeline`] add explicit `ValueError` if you don't pass a valid arg
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21242). All of your documentation changes will be reflected on that endpoint."
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21242/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21242",
"html_url": "https://github.com/huggingface/transformers/pull/21242",
"diff_url": "https://github.com/huggingface/transformers/pull/21242.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21242.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21241
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21241/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21241/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21241/events
|
https://github.com/huggingface/transformers/pull/21241
| 1,552,065,438
|
PR_kwDOCUB6oc5IRiqG
| 21,241
|
Add Japanese translation installation.mdx
|
{
"login": "kambehmw",
"id": 22996144,
"node_id": "MDQ6VXNlcjIyOTk2MTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/22996144?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kambehmw",
"html_url": "https://github.com/kambehmw",
"followers_url": "https://api.github.com/users/kambehmw/followers",
"following_url": "https://api.github.com/users/kambehmw/following{/other_user}",
"gists_url": "https://api.github.com/users/kambehmw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kambehmw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kambehmw/subscriptions",
"organizations_url": "https://api.github.com/users/kambehmw/orgs",
"repos_url": "https://api.github.com/users/kambehmw/repos",
"events_url": "https://api.github.com/users/kambehmw/events{/privacy}",
"received_events_url": "https://api.github.com/users/kambehmw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds Japanese translation to installation.mdx
Partially addresses #18413
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@omarespejel @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21241/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21241",
"html_url": "https://github.com/huggingface/transformers/pull/21241",
"diff_url": "https://github.com/huggingface/transformers/pull/21241.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21241.patch",
"merged_at": 1674484711000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21240
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21240/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21240/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21240/events
|
https://github.com/huggingface/transformers/issues/21240
| 1,552,034,771
|
I_kwDOCUB6oc5cgivT
| 21,240
|
AutoTokenizer loading fails with `object has no attribute 'config'`
|
{
"login": "fdalvi",
"id": 859719,
"node_id": "MDQ6VXNlcjg1OTcxOQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/859719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fdalvi",
"html_url": "https://github.com/fdalvi",
"followers_url": "https://api.github.com/users/fdalvi/followers",
"following_url": "https://api.github.com/users/fdalvi/following{/other_user}",
"gists_url": "https://api.github.com/users/fdalvi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fdalvi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fdalvi/subscriptions",
"organizations_url": "https://api.github.com/users/fdalvi/orgs",
"repos_url": "https://api.github.com/users/fdalvi/repos",
"events_url": "https://api.github.com/users/fdalvi/events{/privacy}",
"received_events_url": "https://api.github.com/users/fdalvi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, @fdalvi the code runs as expected if you use `pipeline` instead of `TokenClassificationPipeline`,\r\n```\r\nfrom transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline\r\n\r\nmodel_name = \"QCRI/bert-base-multilingual-cased-pos-english\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nmodel = AutoModelForTokenClassification.from_pretrained(model_name)\r\n\r\npipe = pipeline(task=\"token-classification\", model=model, tokenizer=tokenizer)\r\noutputs = pipe(\"A test example\")\r\nprint(outputs)\r\n```\r\n>>[{'entity': 'DT', 'score': 0.9997243, 'index': 1, 'word': 'A', 'start': 0, 'end': 1}, {'entity': 'NN', 'score': 0.9997472, 'index': 2, 'word': 'test', 'start': 2, 'end': 6}, {'entity': 'NN', 'score': 0.99973196, 'index': 3, 'word': 'example', 'start': 7, 'end': 14}]\r\n\r\nI think there might be a problem with `TokenClassificationPipeline`.\r\nEDIT - as mentioned by @younesbelkada there is no problem with `TokenClassificationPipeline`, it was due to not passing positional arguments correctly, sorry I completely overlooked that part!",
"Thanks @susnato for narrowing it down and for the quick temporary fix! Hope this makes it easier to figure out what the underlying issue is.",
"Hi @fdalvi \r\nThanks for the issue, you need to pass explicit positional arguments into `TokenClassificationPipeline` to make it work. The snippet below works fine:\r\n```\r\nfrom transformers import AutoTokenizer, AutoModelForTokenClassification, TokenClassificationPipeline\r\n\r\nmodel_name = \"QCRI/bert-base-multilingual-cased-pos-english\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nmodel = AutoModelForTokenClassification.from_pretrained(model_name)\r\n\r\npipeline = TokenClassificationPipeline(model=model, tokenizer=tokenizer)\r\noutputs = pipeline(\"A test example\")\r\n```\r\nthe snippet shared by @susnato will also fail if you don't pass positional arguments",
"Ah thats an easy fix! Thanks a lot for the quick response."
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker, @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run the following script:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, TokenClassificationPipeline
model_name = "QCRI/bert-base-multilingual-cased-pos-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
pipeline = TokenClassificationPipeline(model, tokenizer)
outputs = pipeline("A test example")
print(outputs)
```
### Expected behavior
Since this is a part-of-speech model, I expect part-of-speech tags for "A test example". This works as expected in at least version `4.2.0`.
With the latest version (`4.25.1`), the tokenizer loading fails with the error:
`AttributeError: 'BertTokenizerFast' object has no attribute 'config'`
Forcing the Python tokenizer by setting `use_fast=False` changes the error to:
`AttributeError: 'BertTokenizer' object has no attribute 'config'`
Since the model and the code worked recently, is this a regression or an intended breaking change in recent versions? Either way, what's the best way to fix the model/code to make it work again?
Thanks!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21240/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21239
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21239/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21239/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21239/events
|
https://github.com/huggingface/transformers/pull/21239
| 1,552,012,846
|
PR_kwDOCUB6oc5IRZEo
| 21,239
|
[WIP] Add UDOP models
|
{
"login": "raghavanone",
"id": 115454562,
"node_id": "U_kgDOBuGyYg",
"avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raghavanone",
"html_url": "https://github.com/raghavanone",
"followers_url": "https://api.github.com/users/raghavanone/followers",
"following_url": "https://api.github.com/users/raghavanone/following{/other_user}",
"gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions",
"organizations_url": "https://api.github.com/users/raghavanone/orgs",
"repos_url": "https://api.github.com/users/raghavanone/repos",
"events_url": "https://api.github.com/users/raghavanone/events{/privacy}",
"received_events_url": "https://api.github.com/users/raghavanone/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger @NielsRogge The model weights are here https://huggingface.co/ZinengTang/Udop/tree/main , But how to get the config for these models ? \r\n",
"@raghavanone For reference, someone asked the same question on the UDOP repo: https://github.com/microsoft/i-Code/issues/17",
"Note: Cannot proceed further without microsoft releasing the entire weights. Currently vision decoder weights have not been released.",
"If I'm not mistaken, vision decoder weights should not be needed when using the text layout decoder part, only.\r\n`vision_encoder` weights are part of the shared model weights.",
"@raghavanone is there anything else blocking? It sounds like we can proceed with the given weights, assuming that we notify users that the vision decoder is not trained. ",
"@logan-markewich Yes, I will work on closing this within couple of days . ",
"@sgugger Need some pointers on How should this model be tested ? Can I follow the tests used for T5 model and replicate similar tests ? ",
"@NielsRogge Any pointer here ? ",
"I hope it gets merged soon @raghavanone . Nice work :)",
"Forgive my naiveté, why do all the tests call `from_pretrained()` on some variation of `t5`? The UDOP model checkpoints are [here](https://huggingface.co/ZinengTang/Udop/tree/main). Could these be used?",
"Ah, I see that the test script they provide also [uses T5-large](https://github.com/microsoft/i-Code/blob/main/i-Code-Doc/scripts/finetune_rvlcdip.sh), I expected it to use one of those checkpoints",
"@raghavanone how are things going with this so far? I'm very interested in using this model as soon as it gets integrated - if you need a hand with anything let me know! And thanks for bringing it into the library 😄 \r\n ",
"> @raghavanone how are things going with this so far? I'm very interested in using this model as soon as it gets integrated - if you need a hand with anything let me know! And thanks for bringing it into the library 😄\r\n\r\n@thefirebanks I am working on fixing last few tests. Hoping to close this PR very soon. Sorry for the delay.",
"@raghavanone I am currently trying to finetune `UdopUniModelForConditionalGeneration` using this PR. I ran into the following exception while training:\r\n\r\n```\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/models/udop/modeling_udop.py\", line 2422, in forward\r\n encoder_outputs = self.encoder(\r\n TypeError: forward() got an unexpected keyword argument 'ids_keep'`\r\n```\r\n\r\nI explained what appears to be happening in [this comment](https://github.com/huggingface/transformers/commit/ea7e44ca37d14d24798ed938b52ce3b2a202816f#r103307941).\r\n\r\nIt looks like the `ids_keep` parameter was removed from `UdopUniStack` but not removed from the call to it in `UdopUniModelForConditionalGeneration`\r\n\r\n**EDIT**\r\nLooks like `output_attentions`, also needs to be removed\r\nAnd in the `self.decoder()` call, `cross_attn_head_mask`, `output_attentions` \r\n\r\nHappy to make the changes myself with repo permissions\r\n\r\n",
"> @raghavanone I am currently trying to finetune `UdopUniModelForConditionalGeneration` using this PR. I ran into the following exception while training:\r\n> \r\n> ```\r\n> File \"/opt/conda/lib/python3.8/site-packages/transformers/models/udop/modeling_udop.py\", line 2422, in forward\r\n> encoder_outputs = self.encoder(\r\n> TypeError: forward() got an unexpected keyword argument 'ids_keep'`\r\n> ```\r\n> \r\n> I explained what appears to be happening in [this comment](https://github.com/huggingface/transformers/commit/ea7e44ca37d14d24798ed938b52ce3b2a202816f#r103307941).\r\n> \r\n> It looks like the `ids_keep` parameter was removed from `UdopUniStack` but not removed from the call to it in `UdopUniModelForConditionalGeneration`\r\n> \r\n> **EDIT** Looks like `output_attentions`, also needs to be removed And in the `self.decoder()` call, `cross_attn_head_mask`, `output_attentions`\r\n> \r\n> Happy to make the changes myself with repo permissions\r\n\r\n@plamb-viso Yes, removing those parameters were not done in all places, I have fixed it locally. I am working on fixing failing tests. This the last step pending for merging. Fixing these tests are taking more time than expected. ",
"@raghavanone I saw you closed this PR. Skimming over your work, the PR seemed to be in a rather good state. Were there any blockers you encountered? IMO, it would be nice to add UDOP models to Hugging Face at some point.",
"@maxjeblick @NielsRogge feels that the code in the original repo is a bit hacky; he is working on a separate PR with a better UDOP implementation, so this was closed in consultation with him. He should open a PR soon.\n\n@NielsRogge please do add more details for the benefit of folks following this PR ",
"Thanks a lot for the fast reply!",
"@NielsRogge @raghavanone please link the new PR when it's available, for people subscribed to this one",
"Hi yes I'll open a PR soon! Thanks a lot for your work already @raghavanone, will ping you on the PR ",
"Hi @NielsRogge I saw the large amount of commits on your new UDOP branch, curious if you have any idea on when you think a PR might be ready",
"Sorry to keep hammering on this, but again I have noticed a flurry of activity on that branch, then almost 2 weeks off. Curious what the plan is for it @NielsRogge ",
"Hi @plamb-viso sorry for the late reply, the model is working, only have limited time to work on it. I'll open a PR this weekend/Monday.\r\n\r\nFor now you can already use the model if you're curious, check [this code example](https://github.com/NielsRogge/transformers/blob/14f327d1e9804aeddbe420bd44b811945a3aadd4/tests/models/udop/test_modeling_udop.py#L363) regarding usage. Model is already on the hub [here](https://huggingface.co/nielsr/udop-large).",
"Out of curiosity @NielsRogge : did you ever use your implementation to fine tune it on a task like CORD?",
"I've fine-tuned the model on a [toy dataset of RVL-CDIP](https://huggingface.co/datasets/nielsr/rvl_cdip_10_examples_per_class), works well but the model is pretty heavy, got OOM on Google Colab even with batch size = 1 so had to use a bigger GPU. The author only released large variants. ",
"In my original work on @raghavanone 's version of the model, I also had to use a batch size of 1 to get it to not OOM on 40gb GPUs"
] | 1,674
| 1,682
| 1,678
|
CONTRIBUTOR
| null |
#20650
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21239/reactions",
"total_count": 7,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21239/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21239",
"html_url": "https://github.com/huggingface/transformers/pull/21239",
"diff_url": "https://github.com/huggingface/transformers/pull/21239.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21239.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21238
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21238/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21238/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21238/events
|
https://github.com/huggingface/transformers/issues/21238
| 1,551,896,514
|
I_kwDOCUB6oc5cgA_C
| 21,238
|
Statement seems to have no effect
|
{
"login": "daskol",
"id": 9336514,
"node_id": "MDQ6VXNlcjkzMzY1MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9336514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daskol",
"html_url": "https://github.com/daskol",
"followers_url": "https://api.github.com/users/daskol/followers",
"following_url": "https://api.github.com/users/daskol/following{/other_user}",
"gists_url": "https://api.github.com/users/daskol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daskol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daskol/subscriptions",
"organizations_url": "https://api.github.com/users/daskol/orgs",
"repos_url": "https://api.github.com/users/daskol/repos",
"events_url": "https://api.github.com/users/daskol/events{/privacy}",
"received_events_url": "https://api.github.com/users/daskol/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"It turns out that this is not a function but a property which does some complex initialization. :shrug:\r\n"
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
### System Info
transformers from `v4.3.3` to `v4.25.1`.
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just stare at [this line][1] (method is not actually called).
```python
class Trainer:
def __init__(self, ...)
...
# force device and distributed setup init explicitly
args._setup_devices
...
```
This change was made on 2021-02-11 (almost two years ago).
[1]: https://github.com/huggingface/transformers/blob/4e730b387364c9f46b6b1b0c79fdaf0903c42257/src/transformers/trainer.py#L329
### Expected behavior
I do not know what to expect because of the issue. Maybe all distributed (at least parallel) training with PyTorch is broken; maybe everything is fine. I am totally not sure. I'd like to see some regression tests or something that proves that there is no issue, or that something was broken after this change.
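For readers puzzled by the bare statement: as the single comment on this issue notes, `_setup_devices` is a property, so merely accessing it runs device setup. A minimal, self-contained sketch of that pattern (illustrative only — not the actual `TrainingArguments` code):

```python
from functools import cached_property

class Args:
    """Mimics why `args._setup_devices` is not a no-op: it is a
    (cached) property, so a bare attribute access runs initialization."""

    def __init__(self):
        self.device_ready = False

    @cached_property
    def _setup_devices(self):
        # In the real Trainer, device / distributed setup happens
        # as a side effect of the first access, then gets cached.
        self.device_ready = True
        return "cpu"

args = Args()
assert args.device_ready is False
args._setup_devices  # looks like a statement with no effect, but runs setup
assert args.device_ready is True
```

This is why the seemingly dead statement in `Trainer.__init__` does have an effect, and why a static analyzer flags it anyway.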
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21238/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21237
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21237/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21237/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21237/events
|
https://github.com/huggingface/transformers/pull/21237
| 1,551,830,527
|
PR_kwDOCUB6oc5IQ3Hx
| 21,237
|
Add support of backward_prefetch and forward_prefetch
|
{
"login": "raghavanone",
"id": 115454562,
"node_id": "U_kgDOBuGyYg",
"avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raghavanone",
"html_url": "https://github.com/raghavanone",
"followers_url": "https://api.github.com/users/raghavanone/followers",
"following_url": "https://api.github.com/users/raghavanone/following{/other_user}",
"gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions",
"organizations_url": "https://api.github.com/users/raghavanone/orgs",
"repos_url": "https://api.github.com/users/raghavanone/repos",
"events_url": "https://api.github.com/users/raghavanone/events{/privacy}",
"received_events_url": "https://api.github.com/users/raghavanone/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger Done, But not sure why this test is failing. Any pointers on how to make this build green would help.",
"@sgugger @pacman100 Need pointers on why this test is failing.",
"> The test is a flaky one, don't worry about it. Thanks for iterating, I just have one last comment on the deprecation warning for `fsdp_min_num_params` and we can merge this!\r\n\r\nDone\r\n",
"@sgugger @pacman100 Can we merge this PR? ",
"Hello @raghavanone, could you please resolve the comments above that I have left unresolved, as they are yet to be addressed?",
"> Hello @raghavanone, could you please resolve the comments above that I have left unresolved, as they are yet to be addressed?\r\n\r\nDone",
"Thank you @raghavanone for iterating and addressing the comments and for the overall contribution! 🚀 "
] | 1,674
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
#21156
Adds support for backward_prefetch and forward_prefetch in trainer.
@sgugger @pacman100
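As a hedged sketch of how these two options are typically supplied once merged — the key names below follow the `fsdp_config` conventions documented in recent `transformers` releases and are assumptions about the final shape of this PR:

```json
{
  "backward_prefetch": "backward_pre",
  "forward_prefetch": true
}
```

Such a file would be passed alongside `--fsdp full_shard` via `--fsdp_config fsdp_config.json`; the values map onto PyTorch FSDP's `BackwardPrefetch` enum and `forward_prefetch` flag.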
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21237/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21237",
"html_url": "https://github.com/huggingface/transformers/pull/21237",
"diff_url": "https://github.com/huggingface/transformers/pull/21237.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21237.patch",
"merged_at": 1675176696000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21236
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21236/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21236/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21236/events
|
https://github.com/huggingface/transformers/pull/21236
| 1,551,813,927
|
PR_kwDOCUB6oc5IQ0Ba
| 21,236
|
Optimize by not computing gradients for parameters set to requires_grad=False
|
{
"login": "raghavanone",
"id": 115454562,
"node_id": "U_kgDOBuGyYg",
"avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raghavanone",
"html_url": "https://github.com/raghavanone",
"followers_url": "https://api.github.com/users/raghavanone/followers",
"following_url": "https://api.github.com/users/raghavanone/following{/other_user}",
"gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions",
"organizations_url": "https://api.github.com/users/raghavanone/orgs",
"repos_url": "https://api.github.com/users/raghavanone/repos",
"events_url": "https://api.github.com/users/raghavanone/events{/privacy}",
"received_events_url": "https://api.github.com/users/raghavanone/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21236). All of your documentation changes will be reflected on that endpoint.",
"@sgugger Need to retrigger this build .\r\n"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
Fix #21182
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21236/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21236",
"html_url": "https://github.com/huggingface/transformers/pull/21236",
"diff_url": "https://github.com/huggingface/transformers/pull/21236.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21236.patch",
"merged_at": 1674484080000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21235
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21235/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21235/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21235/events
|
https://github.com/huggingface/transformers/pull/21235
| 1,551,795,798
|
PR_kwDOCUB6oc5IQwqm
| 21,235
|
WIP porting of lite transformer
|
{
"login": "raghavanone",
"id": 115454562,
"node_id": "U_kgDOBuGyYg",
"avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raghavanone",
"html_url": "https://github.com/raghavanone",
"followers_url": "https://api.github.com/users/raghavanone/followers",
"following_url": "https://api.github.com/users/raghavanone/following{/other_user}",
"gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions",
"organizations_url": "https://api.github.com/users/raghavanone/orgs",
"repos_url": "https://api.github.com/users/raghavanone/repos",
"events_url": "https://api.github.com/users/raghavanone/events{/privacy}",
"received_events_url": "https://api.github.com/users/raghavanone/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@NielsRogge Need some help on how to go about the conversion script and testing. \r\n\r\nThe original model is not on PyTorch Hub; it has only a Google Drive link. In the conversion script, should I download and convert? ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21235). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @raghavanone, do you still want to proceed with this PR? If yes, I'll reopen it :) ",
"@NielsRogge Yes, please keep it open; I want to wrap up the UDOP PR before wrapping this up. "
] | 1,674
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
#19730
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21235/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21235",
"html_url": "https://github.com/huggingface/transformers/pull/21235",
"diff_url": "https://github.com/huggingface/transformers/pull/21235.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21235.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21234
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21234/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21234/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21234/events
|
https://github.com/huggingface/transformers/pull/21234
| 1,551,794,547
|
PR_kwDOCUB6oc5IQwbT
| 21,234
|
Speed up `BeamScorer` by 1000%
|
{
"login": "fzyzcjy",
"id": 5236035,
"node_id": "MDQ6VXNlcjUyMzYwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fzyzcjy",
"html_url": "https://github.com/fzyzcjy",
"followers_url": "https://api.github.com/users/fzyzcjy/followers",
"following_url": "https://api.github.com/users/fzyzcjy/following{/other_user}",
"gists_url": "https://api.github.com/users/fzyzcjy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fzyzcjy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fzyzcjy/subscriptions",
"organizations_url": "https://api.github.com/users/fzyzcjy/orgs",
"repos_url": "https://api.github.com/users/fzyzcjy/repos",
"events_url": "https://api.github.com/users/fzyzcjy/events{/privacy}",
"received_events_url": "https://api.github.com/users/fzyzcjy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I am really unfamiliar with huggingface CI, it errors:\r\n\r\n```\r\nFrom github.com:huggingface/transformers\r\n * [new ref] refs/pull/21234/head -> origin/pull/21234\r\nChecking out branch\r\nfatal: reference is not a tree: 9fb13c79a72d13cdf0dd59d48762cd7c95370b29\r\n\r\nexit status 128\r\n```\r\n\r\nHowever, looking at https://github.com/huggingface/transformers/pull/21234/files, seems I only change one file.\r\n\r\nDo not think I can fix this :/\r\n\r\n\r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21234). All of your documentation changes will be reflected on that endpoint.",
"> Before addressing the comments, let's first make sure this change is worth merging :) We won't accept PRs that make the code harder to read (as most vectorized versions of an algorithm are) unless there are clear benefits.\r\nI will need execution time numbers of .generate() from a model at least as big as gpt2, before and after this change, for several number of beams (e.g. 2, 4, 8, and 16). Ideally with and without GPU.\r\n\r\nTotally understand your concerns :) I do not have much time now (you know, doing research and maintaining [my open source libs](https://github.com/fzyzcjy)), but will try to squeeze out some time when possible. Anyway, the PR in its current status may already be somehow useful for users who finds out it is too slow, since they can manually copy and tweak the `generate` function to use a custom scorer.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This speedup is indeed useful.",
"@KexinFeng Thanks!",
"FYI we will be introducing fixed-sized caches in text generation soon (akin to our TF and JAX implementations), which will also imply a refactor (vectorization) of beam methods :)"
] | 1,674
| 1,697
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #20820
For reasons, explanations, benchmarks, etc, please have a look at the issue
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21234/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21234",
"html_url": "https://github.com/huggingface/transformers/pull/21234",
"diff_url": "https://github.com/huggingface/transformers/pull/21234.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21234.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21233
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21233/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21233/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21233/events
|
https://github.com/huggingface/transformers/issues/21233
| 1,551,782,524
|
I_kwDOCUB6oc5cflJ8
| 21,233
|
IndexError: index out of range in self during ViltForImagesAndTextClassification fine-tuning
|
{
"login": "shantanu778",
"id": 25875992,
"node_id": "MDQ6VXNlcjI1ODc1OTky",
"avatar_url": "https://avatars.githubusercontent.com/u/25875992?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shantanu778",
"html_url": "https://github.com/shantanu778",
"followers_url": "https://api.github.com/users/shantanu778/followers",
"following_url": "https://api.github.com/users/shantanu778/following{/other_user}",
"gists_url": "https://api.github.com/users/shantanu778/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shantanu778/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shantanu778/subscriptions",
"organizations_url": "https://api.github.com/users/shantanu778/orgs",
"repos_url": "https://api.github.com/users/shantanu778/repos",
"events_url": "https://api.github.com/users/shantanu778/events{/privacy}",
"received_events_url": "https://api.github.com/users/shantanu778/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @NielsRogge and @alaradirik ",
"Hi @shantanu778, your input shapes seem correct but could you provide a minimal code example that reproduces the error?",
"As you can see the error is in the forward function. I actually didn't changed a lot in ViltForImagesAndTextClassification class. Here is my CustomModel:\r\n\r\n```\r\nclass CustomModel(PreTrainedModel):\r\n def __init__(self, config):\r\n super().__init__(config)\r\n\r\n # print(config)\r\n self.num_labels = config.num_labels\r\n self.vilt = ViltModel(config)\r\n\r\n # Classifier head\r\n num_images = config.num_images\r\n self.classifier = nn.Linear(config.hidden_size * num_images, config.num_labels)\r\n\r\n\r\n def forward(\r\n self,\r\n input_ids = None,\r\n attention_mask = None,\r\n token_type_ids = None,\r\n pixel_values = None,\r\n pixel_mask = None,\r\n head_mask = None,\r\n inputs_embeds = None,\r\n image_embeds = None,\r\n labels = None,\r\n output_attentions = None,\r\n output_hidden_states = None,\r\n return_dict = None,\r\n ):\r\n output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions\r\n output_hidden_states = (\r\n output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states\r\n )\r\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n # print(input_ids)\r\n # print(pixel_values.size())\r\n if pixel_values is not None and pixel_values.ndim == 4:\r\n # add dummy num_images dimension\r\n pixel_values = pixel_values.unsqueeze(1)\r\n\r\n if image_embeds is not None and image_embeds.ndim == 3:\r\n # add dummy num_images dimension\r\n image_embeds = image_embeds.unsqueeze(1)\r\n\r\n num_images = pixel_values.shape[1] if pixel_values is not None else None\r\n # print(num_images)\r\n if num_images is None:\r\n num_images = image_embeds.shape[1] if image_embeds is not None else None\r\n if num_images != self.config.num_images:\r\n raise ValueError(\r\n \"Make sure to match the number of images in the model with the number of images in the input.\"\r\n )\r\n pooler_outputs = []\r\n hidden_states = [] if output_hidden_states else None\r\n 
attentions = [] if output_attentions else None\r\n for i in range(num_images):\r\n # print(i)\r\n # print(input_ids)\r\n # print(pixel_values[:, i, :, :, :])\r\n \r\n # forward every image through the model\r\n outputs = self.vilt(\r\n input_ids,\r\n attention_mask=attention_mask,\r\n token_type_ids=token_type_ids,\r\n pixel_values=pixel_values[:, i, :, :, :] if pixel_values is not None else None,\r\n pixel_mask=pixel_mask[:, i, :, :] if pixel_mask is not None else None,\r\n head_mask=head_mask,\r\n inputs_embeds=inputs_embeds,\r\n image_embeds=image_embeds[:, i, :, :] if image_embeds is not None else None,\r\n image_token_type_idx=i+1,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n return_dict=return_dict,\r\n )\r\n # print(\"=\"*20)\r\n # print(outputs)\r\n pooler_output = outputs.pooler_output if return_dict else outputs[1]\r\n # print(\"=\"*20)\r\n # print(pooler_output)\r\n pooler_outputs.append(pooler_output)\r\n if output_hidden_states:\r\n hidden_states.append(outputs.hidden_states)\r\n if output_attentions:\r\n attentions.append(outputs.attentions)\r\n\r\n pooled_output = torch.cat(pooler_outputs, dim=-1)\r\n logits = self.classifier(pooled_output)\r\n\r\n loss = None\r\n if labels is not None:\r\n loss_fct = nn.CrossEntropyLoss()\r\n # print(labels)\r\n loss = loss_fct(logits.view(-1, self.num_labels), labels)\r\n\r\n if not return_dict:\r\n output = (logits, hidden_states, attentions)\r\n return ((loss,) + output) if loss is not None else output\r\n\r\n return ViltForImagesAndTextClassificationOutput(\r\n loss=loss,\r\n logits=logits,\r\n hidden_states=hidden_states,\r\n attentions=attentions,\r\n )\r\n```\r\n\r\nI don't know where is the exact problem. But after passing **image_token_type_idx= 1**, I didn't get any error.",
"Hi @shantanu778 could you provide a complete example, including the toy inputs, batch generation and the forward pass so that we can replicate the error?\r\n\r\nAre you trying to customize the model or is the CustomModel class is just meant to fix an existing issue?",
"@alaradirik First of all, CustomModel is mainly meant to fix an existing issue. Because when I tried to fine-tune ViltForImagesAndTextClassification, I got above error. Then, I created customModel class as like as [your source code](https://github.com/huggingface/transformers/blob/v4.26.0/src/transformers/models/vilt/modeling_vilt.py#L1281) and fix the issue by editing ** image_token_type_idx** in forward function. But I am not sure is it right or wrong way to fix it. \r\n\r\nNow I am trying to Describe my task,\r\nI have a text and 10 images and I have to find the correct image from the 10 images. I wanted solve this problem as Multi-label Classification.\r\n\r\n*Dataset*\r\ntext | images | gold_image\r\n -- | -- | --\r\ngangster outlaw |['image.166.jpg','image.173.jpg', 'image.172.jpg','image.165.jpg', 'image.174.jpg','image.170.jpg','image.171.jpg', 'image.167.jpg'image.168.jpg','image.169.jpg']| 'image.165.jpg'\r\n \r\n*Custom Dataset*\r\n```\r\nclass ImageTextDataset(Dataset):\r\n def __init__(self, data_dir, train_df, data_type, device, text_augmentation=False):\r\n self.data_type = data_type\r\n self.transforms = transforms.Compose([transforms.Resize([512,512]),transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\r\n self.data_dir = data_dir\r\n if self.data_type == \"train\" or self.data_type == \"valid\":\r\n self.all_image_names = list(train_df['images'])\r\n self.context = list(train_df['text'])\r\n self.gold_images = list(train_df['gold_image'])\r\n\r\n else:\r\n raise ValueError(\"Invalid data type. 
Expected one of: %s\" % self.data_type)\r\n\r\n def __len__(self):\r\n return len(self.context)\r\n\r\n def __getitem__(self, idx):\r\n # Load the image and text\r\n context = self.context[idx]\r\n #loading images\r\n if self.data_type=='train' or self.data_type == 'valid':\r\n label = []\r\n images = self.all_image_names[idx]\r\n image = []\r\n for i, im in enumerate(images):\r\n path = os.path.join(self.data_dir, im)\r\n img = Image.open(path)\r\n if img.mode != \"RGB\":\r\n img = img.convert('RGB')\r\n img = self.transforms(img)\r\n image.append(img)\r\n label.append(1.0) if im == self.gold_images[idx] else label.append(0.0)\r\n\r\n sample = {'context':context, 'images': image, 'label': label}\r\n \r\n else:\r\n raise ValueError(\"Invalid data type. Expected one of: %s\" % self.data_type)\r\n return sample\r\n```\r\n\r\n*Custom Data collator Function*\r\n```\r\ndef custom_collate(batch, processor):\r\n tokenizer = processor['tokenizer']\r\n feature_extractor = processor['feature_extractor']\r\n dic = {}\r\n context = []\r\n images = []\r\n labels = []\r\n for item in batch:\r\n context.append(item['context'])\r\n images.append(item['images'])\r\n labels.append(item['label'])\r\n\r\n pixel_masks, pixel_values= [], [],\r\n for idx, s in enumerate(images):\r\n # print(s)\r\n pixel_mask, pixel_value, label = [], [], []\r\n for jdx, img in enumerate(s):\r\n # print(img.size())\r\n # print(img.size())\r\n feature_encoding = feature_extractor(img, return_tensors=\"pt\")\r\n pixel_mask.append(feature_encoding['pixel_mask'].squeeze(0))\r\n pixel_value.append(feature_encoding['pixel_values'].squeeze(0))\r\n pixel_mask = torch.stack(pixel_mask)\r\n pixel_value = torch.stack(pixel_value)\r\n\r\n pixel_masks.append(pixel_mask)\r\n pixel_values.append(pixel_value)\r\n\r\n encoding = tokenizer(context, return_tensors=\"pt\", padding=True ,truncation=True, max_length=40)\r\n encoding['pixel_values'] = torch.stack(pixel_values)\r\n encoding['pixel_mask'] = 
torch.stack(pixel_masks)\r\n encoding['labels'] = torch.as_tensor(labels)\r\n return encoding\r\n```\r\n*Training Script*\r\n\r\n```\r\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\ncheckpoint = \"dandelin/vilt-b32-finetuned-coco\"\r\ntokenizer = BertTokenizerFast.from_pretrained(\"bert-base-uncased\")\r\nfeature_extractor = ViltFeatureExtractor.from_pretrained(checkpoint)\r\nprocessor = {\r\n 'tokenizer': tokenizer,\r\n 'feature_extractor': feature_extractor\r\n}\r\nmodel=CustomModel(config = ViltConfig.from_pretrained(checkpoint, output_attentions=True,output_hidden_states=True, num_images=10, num_labels=10, problem_type=\"multi_label_classification\"))\r\nmodel.to(device)\r\nprint(model.config.architectures[0])\r\n# Create the dataset\r\ntrain_ds = ImageTextDataset('/train_images_v1', train, data_type=\"train\",device = device, text_augmentation=True)\r\n# Create the dataloader\r\ntrain_dataloader = DataLoader(train_ds, shuffle=True, batch_size=6, collate_fn=lambda batch: custom_collate(batch, processor))\r\nprint(len(train_dataloader))\r\n# model.to(device)\r\nlr = 5e-5\r\noptimizer = AdamW(model.parameters(), lr=lr)\r\nnum_epochs = 2\r\nnum_training_steps = num_epochs * len(train_dataloader)\r\nprogress_bar_train = tqdm(range(num_training_steps))\r\nlr_scheduler = get_scheduler(\r\n \"linear\",\r\n optimizer=optimizer,\r\n num_warmup_steps=0,\r\n num_training_steps=num_training_steps,\r\n)\r\nprint(num_training_steps)\r\nfor i in range(num_epochs):\r\n total_loss = 0\r\n print(f\"Epoch {i+1}\")\r\n model.train()\r\n for batch in train_dataloader:\r\n batch.to(device)\r\n outputs = model(input_ids=batch['input_ids'], pixel_values=batch['pixel_values'], labels=batch['labels'])\r\n loss = outputs.loss\r\n loss.backward()\r\n optimizer.step()\r\n lr_scheduler.step()\r\n optimizer.zero_grad()\r\n progress_bar_train.update(1)\r\n```\r\nNow if you use ViltForImagesAndTextClassification for fine-tuning, you will encounter the error. 
Then if you use my CustomModel in my previous comment, it will solve the issue. \r\n\r\nN:B: I never created issue before therefore I don't know the proper way to explain the problem and task. Sorry for your inconvenience. \r\n",
"Hi @shantanu778, could you provide a minimal code example that reproduces the error without the custom class?\r\n\r\n",
"I don't know how to give u minimal code example,\r\nI describe before what I wanted to do.\r\nif u try to fine-tune ViltForImagesAndTextClassification with 10 images instead of 2, I think you will able to generate the error.\r\nIn my case, instead of using CustomClass use ViltForImagesAndTextClassification rest of them are as like as I mentioned earlier. @alaradirik ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"A simple solution:set modality_type_vocab_size = num_images+1"
] | 1,674
| 1,685
| 1,678
|
NONE
| null |
### System Info
I am running on Google Colab. I got the same error on GPU; the environment below is from a CPU session.
- `transformers` version: 4.25.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Datasets
text | images
-- | --
moorhen swamphen | [image1.jpg, image2.jpg, image3.jpg, image4.jpg, image5.jpg, image6.jpg, image7.jpg, image8.jpg, image9.jpg, image10.jpg]

According to the dataset, I have to pass 1 text with 10 images, so my input shapes are:
```
pixel_values: torch.Size([6, 10, 3, 384, 384])
pixel_mask: torch.Size([6, 10, 384, 384])
Input_ids: torch.Size([6, 9])
```
According to the forward function of [ViltForImagesAndTextClassification](https://github.com/huggingface/transformers/blob/v4.25.1/src/transformers/models/vilt/modeling_vilt.py#L1281) I can pass **num_images** while calling the model.
But during training, the model raises the following error:
```
IndexError Traceback (most recent call last)
[<ipython-input-27-191138835385>](https://localhost:8080/#) in <module>
70 # encoding = base_processor(images, batch[1], return_tensors="pt")
71
---> 72 outputs = model(input_ids=batch['input_ids'], pixel_values=batch['pixel_values'], labels=batch['labels'])
73
74 # print(outputs)
8 frames
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
[<ipython-input-23-da8fb21f3dcd>](https://localhost:8080/#) in forward(self, input_ids, attention_mask, token_type_ids, pixel_values, pixel_mask, head_mask, inputs_embeds, image_embeds, labels, output_attentions, output_hidden_states, return_dict)
64
65 # forward every image through the model
---> 66 outputs = self.vilt(
67 input_ids,
68 attention_mask=attention_mask,
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.8/dist-packages/transformers/models/vilt/modeling_vilt.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, token_type_ids, pixel_values, pixel_mask, head_mask, inputs_embeds, image_embeds, image_token_type_idx, output_attentions, output_hidden_states, return_dict)
836 head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
837
--> 838 embedding_output, attention_mask = self.embeddings(
839 input_ids,
840 attention_mask,
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.8/dist-packages/transformers/models/vilt/modeling_vilt.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, token_type_ids, pixel_values, pixel_mask, inputs_embeds, image_embeds, image_token_type_idx)
231 torch.zeros_like(attention_mask, dtype=torch.long, device=text_embeds.device)
232 )
--> 233 image_embeds = image_embeds + self.token_type_embeddings(
234 torch.full_like(image_masks, image_token_type_idx, dtype=torch.long, device=text_embeds.device)
235 )
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/sparse.py](https://localhost:8080/#) in forward(self, input)
158
159 def forward(self, input: Tensor) -> Tensor:
--> 160 return F.embedding(
161 input, self.weight, self.padding_idx, self.max_norm,
162 self.norm_type, self.scale_grad_by_freq, self.sparse)
[/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py](https://localhost:8080/#) in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2208 # remove once script supports set_grad_enabled
2209 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2210 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
2211
2212
IndexError: index out of range in self
```
However, if I change **image_token_type_idx=i + 1** to **image_token_type_idx=1** in the forward function (shown below), where each image is passed through the ViLT model, training works fine.
```
for i in range(num_images):
# forward every image through the model
outputs = self.vilt(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
pixel_values=pixel_values[:, i, :, :, :] if pixel_values is not None else None,
pixel_mask=pixel_mask[:, i, :, :] if pixel_mask is not None else None,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
image_embeds=image_embeds[:, i, :, :] if image_embeds is not None else None,
image_token_type_idx=i + 1,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
```
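The failing lookup can be reproduced in miniature without torch. This is a hedged sketch: the `modality_type_vocab_size` of 3 is an assumption about the checkpoint (text plus two image token types), not a value taken from the issue.

```python
# Pure-Python sketch of why the embedding lookup raises IndexError.
# Assumption: the checkpoint's token_type_embeddings table has only
# modality_type_vocab_size rows, while the loop asks for indices 1..num_images.
modality_type_vocab_size = 3  # assumed: text + 2 image token types
num_images = 10

token_type_embeddings = list(range(modality_type_vocab_size))  # stand-in table

failed_at = None
for i in range(num_images):
    image_token_type_idx = i + 1  # 1..10, but only indices 0..2 exist
    try:
        _ = token_type_embeddings[image_token_type_idx]
    except IndexError:
        failed_at = image_token_type_idx
        break

# Increasing modality_type_vocab_size to num_images + 1 makes every
# index 1..num_images valid, which matches the workaround of not exceeding
# the table (image_token_type_idx=1) or growing the table itself.
```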
### Expected behavior
According to the documentation, passing any number of images via `num_images` should work without error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21233/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21232
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21232/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21232/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21232/events
|
https://github.com/huggingface/transformers/pull/21232
| 1,551,744,850
|
PR_kwDOCUB6oc5IQnWH
| 21,232
|
[Mask2Former] Add doc tests
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh not sure why CI is failing, running `make fixup` locally doesn't result in any updates. My version is black 22.3",
"@NielsRogge mine is also 22.03, but it reformates the modeling file. Not sure why though, do you want me to push? I can also post the whole content of `pip freeze` for you to check the package versions.",
"Feel free to push a commit :)",
"I pushed a commit. Actually, you are right. `make fixup` will change the files twice in the run, and that 2 changes cancel each other's change. I am not sure why. After running `make style` to fix some issues, it then works for `make fixup` too."
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR ensures that the code snippets in Mask2Former's docs work as intended, and are tested.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21232/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21232",
"html_url": "https://github.com/huggingface/transformers/pull/21232",
"diff_url": "https://github.com/huggingface/transformers/pull/21232.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21232.patch",
"merged_at": 1674646484000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21231
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21231/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21231/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21231/events
|
https://github.com/huggingface/transformers/issues/21231
| 1,551,735,414
|
I_kwDOCUB6oc5cfZp2
| 21,231
|
how to fine tune BlipForImageTextRetrieval?
|
{
"login": "ScottishFold007",
"id": 36957508,
"node_id": "MDQ6VXNlcjM2OTU3NTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/36957508?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ScottishFold007",
"html_url": "https://github.com/ScottishFold007",
"followers_url": "https://api.github.com/users/ScottishFold007/followers",
"following_url": "https://api.github.com/users/ScottishFold007/following{/other_user}",
"gists_url": "https://api.github.com/users/ScottishFold007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ScottishFold007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ScottishFold007/subscriptions",
"organizations_url": "https://api.github.com/users/ScottishFold007/orgs",
"repos_url": "https://api.github.com/users/ScottishFold007/repos",
"events_url": "https://api.github.com/users/ScottishFold007/events{/privacy}",
"received_events_url": "https://api.github.com/users/ScottishFold007/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @younesbelkada ",
"I'd recommend fine-tuning CLIP if you want to do image-text retrieval using this script: https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text.\r\n\r\nFine-tuning BLIP might be harder as it involves some very specific loss functions.",
"Thank you for your answer! I tried the fintuning of clip, and it was successful. I use the blip model because I want to use its text matching (binary classification model) to filter out noisy data that is not matched by the text diagram. Because I collect a large number of unlabeled pictures from the Internet, I want to use the blip caption model to tag them, and then filter the invalid image data.",
"CLIP can also be used for image-text matching, by just encoding the image, encoding the text, and computing a cosine similarity score between the respective embeddings.",
"> CLIP can also be used for image-text matching, by just encoding the image, encoding the text, and computing a cosine similarity score between the respective embeddings.\r\n\r\nIn fact, what I want to express is that this graphical text matching classifier is similar to the cross encoder in text matching, which can get the interaction information between the two by splicing the graphical embedding, so the accuracy will be higher than the clip (similar to the bi-encoder in text matching)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### Feature request
How can `BlipForImageTextRetrieval` be fine-tuned?
Could some of the methods from here be borrowed to achieve this?
https://github.com/salesforce/LAVIS/blob/main/lavis/models/blip_models/blip_retrieval.py
### Motivation
Implement an image-text matching model in order to filter out poor-quality image-text pairs.
### Your contribution
Not available at the moment
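As a lightweight alternative, the discussion suggests CLIP-style image-text matching: encode the image, encode the text, and rank by cosine similarity. A minimal sketch of that scoring step — the embedding vectors below are illustrative stand-ins, not real model outputs:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Illustrative embeddings; in practice these would come from an
# image encoder and a text encoder sharing an embedding space.
image_emb = [0.2, 0.9, 0.1]
text_embs = {"a cat": [0.1, 0.8, 0.2], "a car": [0.9, 0.1, 0.3]}

# The best-matching caption is the one with the highest similarity.
best = max(text_embs, key=lambda t: cosine(image_emb, text_embs[t]))
```

The same score can be thresholded to drop image-text pairs whose similarity is too low, which is the filtering use case described in the motivation.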
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21231/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21230
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21230/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21230/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21230/events
|
https://github.com/huggingface/transformers/issues/21230
| 1,551,727,674
|
I_kwDOCUB6oc5cfXw6
| 21,230
|
When adding tokens to BlipModel, a bug appears
|
{
"login": "ScottishFold007",
"id": 36957508,
"node_id": "MDQ6VXNlcjM2OTU3NTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/36957508?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ScottishFold007",
"html_url": "https://github.com/ScottishFold007",
"followers_url": "https://api.github.com/users/ScottishFold007/followers",
"following_url": "https://api.github.com/users/ScottishFold007/following{/other_user}",
"gists_url": "https://api.github.com/users/ScottishFold007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ScottishFold007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ScottishFold007/subscriptions",
"organizations_url": "https://api.github.com/users/ScottishFold007/orgs",
"repos_url": "https://api.github.com/users/ScottishFold007/repos",
"events_url": "https://api.github.com/users/ScottishFold007/events{/privacy}",
"received_events_url": "https://api.github.com/users/ScottishFold007/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @ScottishFold007 \r\nThanks for the issue, I don't really see how the script you provided can add a new vocab to the model, can you either provide the full script or the full traceback of the error? Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
When I execute the following code to add vocabulary, an error is reported:
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
import sys
import logging
import pandas as pd
from dataclasses import dataclass, field
from typing import Optional
import torch
from datasets import Dataset
from datasets import load_dataset
from PIL import Image
from torchvision.io import ImageReadMode, read_image
from torchvision.transforms import CenterCrop, ConvertImageDtype, Normalize, Resize
from torchvision.transforms.functional import InterpolationMode
from transformers import BlipModel, BlipForImageTextRetrieval, BlipForConditionalGeneration, BlipProcessor, AutoConfig, AutoTokenizer
import transformers
from transformers import (
HfArgumentParser,
Trainer,
TrainingArguments,
set_seed,
)
model_path= r'D:\all_models_archives\models--Salesforce--blip-itm-large-coco'
tokenizer= AutoTokenizer.from_pretrained(model_path)
processor = BlipProcessor.from_pretrained(model_path)
model_config= AutoConfig.from_pretrained(model_path)
model= BlipModel.from_pretrained(pretrained_model_name_or_path= model_path, config= model_config)
```

### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
import sys
import logging
import pandas as pd
from dataclasses import dataclass, field
from typing import Optional
import torch
from datasets import Dataset
from datasets import load_dataset
from PIL import Image
from torchvision.io import ImageReadMode, read_image
from torchvision.transforms import CenterCrop, ConvertImageDtype, Normalize, Resize
from torchvision.transforms.functional import InterpolationMode
from transformers import BlipModel, BlipForImageTextRetrieval, BlipForConditionalGeneration, BlipProcessor, AutoConfig, AutoTokenizer
import transformers
from transformers import (
HfArgumentParser,
Trainer,
TrainingArguments,
set_seed,
)
model_path= r'D:\all_models_archives\models--Salesforce--blip-itm-large-coco'
tokenizer= AutoTokenizer.from_pretrained(model_path)
processor = BlipProcessor.from_pretrained(model_path)
model_config= AutoConfig.from_pretrained(model_path)
model= BlipModel.from_pretrained(pretrained_model_name_or_path= model_path, config= model_config)
```

### Expected behavior
Adding vocabulary can be done normally
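For context, the usual `transformers` pattern for growing a vocabulary is `tokenizer.add_tokens([...])` followed by `model.resize_token_embeddings(len(tokenizer))`. A conceptual, torch-free sketch of what that resize does — the sizes here are illustrative, not BLIP's real dimensions:

```python
import random

# Conceptual sketch of resize_token_embeddings: append freshly
# initialised rows to the embedding matrix so that every tokenizer id
# has a corresponding vector.
old_vocab_size, hidden_size = 4, 8
embedding_matrix = [
    [random.random() for _ in range(hidden_size)]
    for _ in range(old_vocab_size)
]

num_added_tokens = 2  # e.g. after tokenizer.add_tokens(["<tok1>", "<tok2>"])
for _ in range(num_added_tokens):
    # New rows are typically initialised from a small-variance normal.
    embedding_matrix.append([random.gauss(0.0, 0.02) for _ in range(hidden_size)])

new_vocab_size = len(embedding_matrix)
```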
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21230/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21229
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21229/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21229/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21229/events
|
https://github.com/huggingface/transformers/pull/21229
| 1,551,703,898
|
PR_kwDOCUB6oc5IQf50
| 21,229
|
Add scikit-learn dependency to train langage-modeling
|
{
"login": "mostafaelhoushi",
"id": 1451293,
"node_id": "MDQ6VXNlcjE0NTEyOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1451293?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mostafaelhoushi",
"html_url": "https://github.com/mostafaelhoushi",
"followers_url": "https://api.github.com/users/mostafaelhoushi/followers",
"following_url": "https://api.github.com/users/mostafaelhoushi/following{/other_user}",
"gists_url": "https://api.github.com/users/mostafaelhoushi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mostafaelhoushi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mostafaelhoushi/subscriptions",
"organizations_url": "https://api.github.com/users/mostafaelhoushi/orgs",
"repos_url": "https://api.github.com/users/mostafaelhoushi/repos",
"events_url": "https://api.github.com/users/mostafaelhoushi/events{/privacy}",
"received_events_url": "https://api.github.com/users/mostafaelhoushi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
In order to run the language modeling training script, we need `scikit-learn` to be installed, so this PR adds it to `requirements.txt`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21229/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21229",
"html_url": "https://github.com/huggingface/transformers/pull/21229",
"diff_url": "https://github.com/huggingface/transformers/pull/21229.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21229.patch",
"merged_at": 1674485686000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21228
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21228/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21228/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21228/events
|
https://github.com/huggingface/transformers/issues/21228
| 1,551,698,837
|
I_kwDOCUB6oc5cfQuV
| 21,228
|
Issue Importing Image Resolution Models
|
{
"login": "pravin-santhanam27",
"id": 40184835,
"node_id": "MDQ6VXNlcjQwMTg0ODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/40184835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pravin-santhanam27",
"html_url": "https://github.com/pravin-santhanam27",
"followers_url": "https://api.github.com/users/pravin-santhanam27/followers",
"following_url": "https://api.github.com/users/pravin-santhanam27/following{/other_user}",
"gists_url": "https://api.github.com/users/pravin-santhanam27/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pravin-santhanam27/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pravin-santhanam27/subscriptions",
"organizations_url": "https://api.github.com/users/pravin-santhanam27/orgs",
"repos_url": "https://api.github.com/users/pravin-santhanam27/repos",
"events_url": "https://api.github.com/users/pravin-santhanam27/events{/privacy}",
"received_events_url": "https://api.github.com/users/pravin-santhanam27/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, @pravin-santhanam27 there seems to be a problem loading `Swin2SRForImageSuperResolution` with stable transformers(4.25.1) which we install from pypi. But this error is not present if you install from the source(4.26.0.dev0). This error is fixed in the source version(which is regularly updated) and will also be updated to stable release later. If you want to use it right now then please install `transformers` from source - `pip install git+https://github.com/huggingface/transformers`",
"Will close this as the issue seems resolved."
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
Hey Everyone,
I am trying to import a model from transformers for deblurring images.
I am on Python 3.10.9 and just installed transformers 4.25.1.
The error comes on import
`from transformers import AutoImageProcessor, Swin2SRForImageSuperResolution`
and the error is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'Swin2SRForImageSuperResolution' from 'transformers' (\env\lib\site-packages\transformers\__init__.py)
```
These are the packages currently installed in my virtual environment:
```
certifi==2022.12.7
charset-normalizer==3.0.1
colorama==0.4.6
filelock==3.9.0
huggingface-hub==0.11.1
idna==3.4
numpy==1.24.1
opencv-python==4.7.0.68
packaging==23.0
Pillow==9.4.0
PyYAML==6.0
regex==2022.10.31
requests==2.28.2
tokenizers==0.13.2
torch==1.13.1+cu117
torchaudio==0.13.1+cu117
torchvision==0.14.1+cu117
tqdm==4.64.1
transformers==4.25.1
typing_extensions==4.4.0
urllib3==1.26.14
```
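Per the replies, `Swin2SRForImageSuperResolution` is only importable from transformers 4.26 onward, so 4.25.1 fails. A small version guard (a sketch — the 4.26 threshold is taken from the discussion) can make the failure explicit instead of a bare ImportError:

```python
def supports_swin2sr(version: str) -> bool:
    # Swin2SRForImageSuperResolution landed after 4.25.1 (per the issue
    # discussion), so require transformers >= 4.26 before importing it.
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) >= (4, 26)

# The reporter's stable release predates the class; a source install
# (4.26.0.dev0 at the time) includes it.
print(supports_swin2sr("4.25.1"), supports_swin2sr("4.26.0.dev0"))
```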
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21228/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21227
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21227/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21227/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21227/events
|
https://github.com/huggingface/transformers/pull/21227
| 1,551,650,838
|
PR_kwDOCUB6oc5IQVUT
| 21,227
|
[WIP] Support BLIP and GIT in image-to-text and VQA pipelines
|
{
"login": "atturaioe",
"id": 76523524,
"node_id": "MDQ6VXNlcjc2NTIzNTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/76523524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atturaioe",
"html_url": "https://github.com/atturaioe",
"followers_url": "https://api.github.com/users/atturaioe/followers",
"following_url": "https://api.github.com/users/atturaioe/following{/other_user}",
"gists_url": "https://api.github.com/users/atturaioe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atturaioe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atturaioe/subscriptions",
"organizations_url": "https://api.github.com/users/atturaioe/orgs",
"repos_url": "https://api.github.com/users/atturaioe/repos",
"events_url": "https://api.github.com/users/atturaioe/events{/privacy}",
"received_events_url": "https://api.github.com/users/atturaioe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21227). All of your documentation changes will be reflected on that endpoint.",
"Hi @NielsRogge, should I remove the return of the topk scores in the VQA pipeline that used ViltForQuestionAnswering only?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
Support BLIP and GIT models in image-to-text and VQA pipelines.
Fixes #21110
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21227/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21227/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21227",
"html_url": "https://github.com/huggingface/transformers/pull/21227",
"diff_url": "https://github.com/huggingface/transformers/pull/21227.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21227.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21226
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21226/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21226/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21226/events
|
https://github.com/huggingface/transformers/pull/21226
| 1,551,650,533
|
PR_kwDOCUB6oc5IQVQU
| 21,226
|
Skip failing test for now
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21226). All of your documentation changes will be reflected on that endpoint."
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
All is said in the title. The test is currently failing on main for no apparent reason (I imagine a new release of one of the deps); more can be found [here](https://app.circleci.com/pipelines/github/huggingface/transformers/55784/workflows/83b929a9-3d0d-482a-a823-806f44824bf8/jobs/673138).
cc @sanchit-gandhi
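For context, skipping a test in a Python suite is typically done with a decorator; the following is a generic, self-contained sketch (the class and test names here are hypothetical, not the actual test touched by this PR):

```python
import unittest

class AudioModelTest(unittest.TestCase):
    @unittest.skip("Failing on main, likely due to a new release of a dependency.")
    def test_example(self):
        # The body is never executed while the skip decorator is in place.
        raise AssertionError("should not run")

# Running the suite reports one skipped test and no failures.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(AudioModelTest)
)
```

The same effect can be scoped more narrowly with `unittest.skipIf` when the failure depends on a specific dependency version.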
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21226/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21226",
"html_url": "https://github.com/huggingface/transformers/pull/21226",
"diff_url": "https://github.com/huggingface/transformers/pull/21226.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21226.patch",
"merged_at": 1674265572000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21225
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21225/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21225/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21225/events
|
https://github.com/huggingface/transformers/pull/21225
| 1,551,508,060
|
PR_kwDOCUB6oc5IP2vF
| 21,225
|
Models docstring
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you @sgugger for cleaning this up. With all ~250 files, I will trust you instead of looking line by line, except for one question below.\r\n\r\nI would definitely prefer to run a doctest first offline before merging this PR - for which I can launch on my side. Previous PRs have shown there are always some surprises. I will launch doctest CI when all reviewers give their approval.\r\n\r\n**So here is my question**\r\n\r\n> Note that in some cases we can't use the auto-classes for preprocessing: when linking to the __call__ method of a processor or image processor, we need the actual class (cc @amyeroberts I changed a couple of things you did here).\r\n\r\nI see even in such places, we still have\r\n```python\r\nPixel values can be obtained using [`AutoImageProcessor`]. See [`ConvNextImageProcessor.__call__`] for details.\r\n```\r\nI don't have much context and prior knowledge, but is it true that we want to use `AutoImageProcessor` but `ConvNextImageProcessor.__call__` in such cases?",
"> With all ~250 files, I will trust you instead of look lines by lines.\r\n\r\nA review would still be much appreciated, as it could catch accidental typos.\r\n\r\n> I would definitely prefer to run a doctest first offline before merging this PR - for which I can launch on my side. From previous PRs, it has shown there are always some surprise. I will launch doctest CI when all reviewers give their approval.\r\n\r\nSure, we can wait for that as long as the results are available before the release branch is cut.\r\n\r\n> I don't have much context and prior knowledge, but is it true we want to use AutoImageProcessor but ConvNextImageProcessor.__call__ in such cases?\r\n\r\nYes.",
"I triggered the doctest CI against the (last) commit (so far) in this PR. Will take a look on the PR changes too :-)\r\n\r\n[run page](https://github.com/huggingface/transformers/actions/runs/3987623228/jobs/6837694181)"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
This PR cleans up all docstrings, following up on #20757 and #21199. It removes the need for the `processor_class` in the TensorFlow and Flax generic examples by setting it in the examples, like #20757 did for PyTorch, then makes a full pass across all models to clean up the docstrings (removing the `processor_class` in the `add_code_sample` decorator, removing random outputs, and using the auto classes for preprocessing).
Note that in some cases we can't use the auto-classes for preprocessing: when linking to the `__call__` method of a processor or image processor, we need the actual class (cc @amyeroberts I changed a couple of things you did here).
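The resulting convention can be illustrated with a short docstring fragment (an illustrative sketch, not copied from the diff; `ConvNext` is used as the example model because it comes up in the review discussion): the auto class is referenced for obtaining inputs, while the concrete image processor class is kept where its `__call__` must be cross-referenced.

```python
# Illustrative docstring fragment following the convention described above:
# AutoImageProcessor for obtaining inputs, the concrete class for the
# `__call__` cross-reference (which the auto class cannot provide).
PIXEL_VALUES_DOCSTRING = r"""
    pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
        Pixel values. Pixel values can be obtained using [`AutoImageProcessor`].
        See [`ConvNextImageProcessor.__call__`] for details.
"""
```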
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21225/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21225/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21225",
"html_url": "https://github.com/huggingface/transformers/pull/21225",
"diff_url": "https://github.com/huggingface/transformers/pull/21225.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21225.patch",
"merged_at": 1674502399000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21224
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21224/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21224/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21224/events
|
https://github.com/huggingface/transformers/pull/21224
| 1,551,479,607
|
PR_kwDOCUB6oc5IPw9t
| 21,224
|
[`BLIP`] fix docstring for `BlipTextxxx`
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the docstrings for `BlipTextModel` and `BlipTextLMHeadModel` to follow the docstring structure of `transformers` so they are rendered properly by the `doc-builder`.
cc @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21224/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21224",
"html_url": "https://github.com/huggingface/transformers/pull/21224",
"diff_url": "https://github.com/huggingface/transformers/pull/21224.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21224.patch",
"merged_at": 1674253002000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21223
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21223/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21223/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21223/events
|
https://github.com/huggingface/transformers/pull/21223
| 1,551,450,147
|
PR_kwDOCUB6oc5IPrDL
| 21,223
|
Add: TensorFlow example for semantic segmentation task guide
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
This PR adds a TensorFlow example to the existing [Semantic Segmentation task guide](https://huggingface.co/docs/transformers/main/en/tasks/semantic_segmentation) using the same dataset and fine-tuning steps.
This example supplements the existing guide and can be helpful to those who choose TensorFlow over PyTorch and would like to use Transformers for semantic segmentation.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21223/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21223",
"html_url": "https://github.com/huggingface/transformers/pull/21223",
"diff_url": "https://github.com/huggingface/transformers/pull/21223.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21223.patch",
"merged_at": 1674498736000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21222
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21222/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21222/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21222/events
|
https://github.com/huggingface/transformers/pull/21222
| 1,551,304,660
|
PR_kwDOCUB6oc5IPLyW
| 21,222
|
Add WhisperTokenizerFast
|
{
"login": "jonatanklosko",
"id": 17034772,
"node_id": "MDQ6VXNlcjE3MDM0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/17034772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonatanklosko",
"html_url": "https://github.com/jonatanklosko",
"followers_url": "https://api.github.com/users/jonatanklosko/followers",
"following_url": "https://api.github.com/users/jonatanklosko/following{/other_user}",
"gists_url": "https://api.github.com/users/jonatanklosko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonatanklosko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonatanklosko/subscriptions",
"organizations_url": "https://api.github.com/users/jonatanklosko/orgs",
"repos_url": "https://api.github.com/users/jonatanklosko/repos",
"events_url": "https://api.github.com/users/jonatanklosko/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonatanklosko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ArthurZucker thanks for the help! I think now the steps are to update the unknown token in the multilingual checkpoints and add `tokenizer.json` to the repos. Let me know if there's anything I can help with :)",
"Feel free to open a community PR on the model repos (hub) linking to this PR (GitHub) 🚀 ",
"@ArthurZucker sure! I've just created https://huggingface.co/openai/whisper-tiny/discussions/5, let me know if it looks as expected and I will open a matching PR on the other checkpoints too.\r\n\r\nFTR I generated the `tokenizer.json` with:\r\n\r\n```python\r\nimport sys\r\nsys.path.reverse()\r\nsys.path.append(\"/Users/jonatanklosko/git/transformers/src\")\r\nsys.path.reverse()\r\n\r\nfrom transformers import WhisperTokenizerFast\r\n\r\ntokenizer = WhisperTokenizerFast.from_pretrained(\"/Users/jonatanklosko/git/hf/whisper-tiny/\")\r\ntokenizer.save_pretrained(\"/Users/jonatanklosko/git/hf/whisper-tiny/\")\r\n```\r\n\r\nI also updated the unknown token configuration manually.",
"Changing the unknown token in configuration leads to a weird behaviour when loading the slow tokenizer, see an example in the PR. Any ideas why that is?",
"So the issue is that the multilingual tokenizer doesn't have `<|endoftext|>` in the initial vocabulary, so it would need to be added from special tokens map. However, when loading special tokens we have this check:\r\n\r\nhttps://github.com/huggingface/transformers/blob/7119bb052a3f492b9af3afe4f3f13132445eba6e/src/transformers/tokenization_utils.py#L419-L420\r\n\r\nand since `eos_token` and `unk_token` are both `<|endoftext|>`, we end up not adding them to the vocabulary.",
"To address this we would need to add `\"<|endoftext|>\": 50257` to `vocab.json` and remove it from `added_tokens.json`. Note that this is the case in the English checkpoints (except with 50256).\r\n\r\nThe question is if this hurts compatibility; when loading the slow tokenizer both of these files would be used to load the vocabulary, so moving the entry from one to the other should be alright?",
"Yep, I think the idea is to make the multilingual added tokens match the ones that we have for english. I forgot to mention but yes, we have to add `\"<|endoftext|>` to the vocabulary instead of `''`. This should normally do the trick (with also the modification of the content of the unknown token. ",
"Ah, so we should actually replace it, so that `<|endoftext|>` gets the id that currently `\"\"` has, and we keep `\"\"` just to make sure the ids are not shifted at any point?\r\n\r\n```\r\n\"<|endoftext|>\": 50256,\r\n\"\": 50257,\r\n```\r\n\r\nand not:\r\n\r\n```\r\n\"\": 50256,\r\n\"<|endoftext|>\": 50257,\r\n```",
"@ArthurZucker I updated the PR on the checkpoint. I tried the remaining failing tests locally pointing tokenizer to the updated revision and they passed, so I think we are good on this side.",
"Note that the only difference is that originally EOS (`<|endoftext|>`) was 50257 and now it is 50256, not sure if that's something to worry about.",
"The EOS toke id appears multiple times in the `config.json` so we need to adjust it too. Let me know if that's the way to go, or if we should swap them back :)",
"> Note that the only difference is that originally EOS (<|endoftext|>) was 50257 and now it is 50256, not sure if that's something to worry about.\r\n\r\nAh, this can be an issue I think. We have to keep it at 50257! So let's leave `''` in the vocab (it is also in the original repo) and we just need `{\"<|endoftext|>\": 50257}` to be in the `added_special_tokens`. See [this repo](https://github.com/openai/whisper/tree/main/whisper/assets/multilingual), which contains most of what we need ",
"@ArthurZucker we need `<|endoftext|>` in the `vocab` rather than `added_tokens` as per https://github.com/huggingface/transformers/pull/21222#issuecomment-1401119817.\r\n\r\nNote that this means unknown token changes from 50256 to 50257, but hopefully that's less invasive.",
"Yeah! That's better",
"Ok, so I think the `openai/whisper-tiny` PR ready too, if there's anything else let me know :)",
"I merged your PR on the hub, now let's fix the failing tests! ",
"@ArthurZucker all green!",
"Will ask for a final review from @sgugger ",
"@ArthurZucker it looks like the new failures come from the GenerationConfig missing some attributes, also looking at `openai/whisper-tiny` the `forced_decoder_ids` have a `null` token and don't match what we have in `config.json`.",
"Hey, `null` token is fine! I added that for the refactoring, it allows the model to automatically predict the language",
"Okay, the error comes from the `tiny_random_testing` where configuration files are created from the config, and thus don't have any of the parameters related to generation. `return_timestamps` is set to `True` but it should not be if there is no generation config. \r\nFeel free to skip these tests for now, unless @ydshieh you have an alternative solution",
"> OKay the error comes from the `tiny_random_testing` where configuration files are created from the config, and thus don't have any of the parameters related to generation. The `return_timestamps` is set to `True` but it should not if there are not generation config. Feel free to skip these tests for now, unless @ydshieh you have an alternative solution\r\n\r\nThe CI is currently running and I can't see which test you are mentioning. I will check later once the CI results is available.",
"PRs for other checkpoints:\r\n\r\n* [whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en/discussions/10)\r\n* [whisper-small.en](https://huggingface.co/openai/whisper-small.en/discussions/6)\r\n* [whisper-base.en](https://huggingface.co/openai/whisper-base.en/discussions/5)\r\n* [whisper-medium.en](https://huggingface.co/openai/whisper-medium.en/discussions/5)\r\n* [whisper-small](https://huggingface.co/openai/whisper-small/discussions/11)\r\n* [whisper-base](https://huggingface.co/openai/whisper-base/discussions/7)\r\n* [whisper-medium](https://huggingface.co/openai/whisper-medium/discussions/7)\r\n* [whisper-large](https://huggingface.co/openai/whisper-large/discussions/20)",
"Hey @ydshieh, the aforementioned tests are not skipped, but you can see the previous CI failure [here](https://app.circleci.com/pipelines/github/huggingface/transformers/56124/workflows/d31aa74b-175d-4c79-a237-cd342ded9900/jobs/677380).",
"Hi, @jonatanklosko could you rebase on main branch? You will need to resolve the conflicts. Let me know if you need help on this. Sorry for being late here.",
"@jonatanklosko Thank you. I will take a look on Monday if the pipeline testing is still failing!",
"@ydaigo perfect, thanks :)",
"Hey @jonatanklosko can you rebase on main to or resolve the merge conflicts?",
"@ArthurZucker done and everything passes now :)"
] | 1,674
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
Adds the fast version of the Whisper tokenizer. The Whisper tokenizer is essentially the GPT2 tokenizer with special tokens. The main differences are the additional normalizer (which I mirrored from the slow tokenizer) and the language/task-dependent prefix tokens.
One of the tokenizer tests is failing because there is no `tokenizer.json` file in the `openai/whisper-*` checkpoints (specifically the `tiny` checkpoint). I added a converter, so it is now possible to load the fast tokenizer from existing checkpoints and export `tokenizer.json`.
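As context for the special-token handling: the review thread above notes that the multilingual checkpoints configure `<|endoftext|>` as both `eos_token` and `unk_token`, which made the slow tokenizer skip it when loading special tokens. The following is a minimal, self-contained sketch of that rule (an illustrative simplification, not the actual `transformers` implementation; the helper name and toy vocabularies are made up for the example):

```python
def should_add_special_token(token, vocab, unk_token):
    """Sketch of the slow-tokenizer rule: a special token is appended to the
    vocabulary only if it is currently unknown AND it is not the unk token
    itself. With eos_token == unk_token == "<|endoftext|>", eos is never added.
    """
    return token not in vocab and token != unk_token

# Multilingual Whisper before the fix: "<|endoftext|>" is absent from
# vocab.json but also configured as unk, so it is skipped...
vocab = {"": 50257}
print(should_add_special_token("<|endoftext|>", vocab, "<|endoftext|>"))  # False
# ...while a genuinely new special token would be added.
print(should_add_special_token("<|startoftranscript|>", vocab, "<|endoftext|>"))  # True
```

This is also why moving `<|endoftext|>` from `added_tokens.json` into `vocab.json`, as done in the hub PRs linked in the thread, resolves the mismatch.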
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21222/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21222",
"html_url": "https://github.com/huggingface/transformers/pull/21222",
"diff_url": "https://github.com/huggingface/transformers/pull/21222.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21222.patch",
"merged_at": 1676959135000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21221
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21221/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21221/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21221/events
|
https://github.com/huggingface/transformers/issues/21221
| 1,551,278,859
|
I_kwDOCUB6oc5cdqML
| 21,221
|
MobileViT
|
{
"login": "ludmila3",
"id": 67962373,
"node_id": "MDQ6VXNlcjY3OTYyMzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/67962373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ludmila3",
"html_url": "https://github.com/ludmila3",
"followers_url": "https://api.github.com/users/ludmila3/followers",
"following_url": "https://api.github.com/users/ludmila3/following{/other_user}",
"gists_url": "https://api.github.com/users/ludmila3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ludmila3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ludmila3/subscriptions",
"organizations_url": "https://api.github.com/users/ludmila3/orgs",
"repos_url": "https://api.github.com/users/ludmila3/repos",
"events_url": "https://api.github.com/users/ludmila3/events{/privacy}",
"received_events_url": "https://api.github.com/users/ludmila3/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Answered here: https://github.com/NielsRogge/Transformers-Tutorials/issues/241"
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.15.0-1027-gcp-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@amyeroberts and @NielsRogge
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Run `run_image_classification.py` with MobileViT models (`mobilevit-x-small`)
### Expected behavior
It breaks first on the `Normalize` function.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21221/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21220
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21220/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21220/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21220/events
|
https://github.com/huggingface/transformers/issues/21220
| 1,551,224,224
|
I_kwDOCUB6oc5cdc2g
| 21,220
|
[Generation Config] General issues
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"1. Agree - I think we should/could allow this functionality.\r\n2. Here I don't think we need to change anything. It's quite intuitive for me that the \"[from_model_config](https://github.com/huggingface/transformers/blob/4e730b387364c9f46b6b1b0c79fdaf0903c42257/src/transformers/generation/configuration_utils.py#L620)\" API has to load a config type object\r\n3. I don't fully understand this - could you add an example? \r\n4. Could you maybe send a link to where the warning is thrown? Would make it easier to understand what logic it's talking about ",
"3. An example would be the following: \r\n```python \r\nclass WhisperForConditionalGeneration:\r\n ...\r\n # redefine generate with custom kwargs like `task`, `return_timestamps` and `is_multilingual` \r\n def generate(\r\n self,\r\n inputs: Optional[torch.Tensor] = None,\r\n generation_config= None,\r\n logits_processor = None,\r\n stopping_criteria = None,\r\n prefix_allowed_tokens_fn = None,\r\n synced_gpus = False,\r\n return_timestamps = None,\r\n task = None,\r\n is_multilingual = None,\r\n **kwargs\r\n ):\r\n # At this point we want the generation config to be initialized, otherwise we have to copy past the initialization\r\n # scheme, and it will be run again when calling super. Also modifying self.generate_config\r\n # here update self.generation_config or generation_config\r\n self.generation_config.return_timestamps = return_timestamps if return_timestamps is not None else False\r\n self.generation_config.task = task if task is not None else False\r\n self.generation_config.is_multilingual = is_multilingual if is_multilingual is not None else False\r\n\r\n if self.generation_config.forced_decoder_ids and task is not None:\r\n if self.generation_config.is_multilingual:\r\n self.generation_config.forced_decoder_ids[1][1]= generation_config.task_to_id[generation_config[\"task\"]]\r\n else:\r\n raise ValueError(\"A task or language were given but the model was trained on english and can thus only transcribe from english to english.\")\r\n if return_timestamps:\r\n logits_processor = [WhisperTimeStampLogitsProcessor(self.generation_config)]\r\n return super().generate(inputs, self.generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, **kwargs)\r\n```\r\nThis is not possible because the initialisation of the config only takes place in the generate. 
But this also means that if a `generation_config.json` file exist (meaning someone went through the trouble of saving a generation config and pushing it to the hub` it is not used automatically either. You have to instantiate it. This is not really good as for example some arguments are only necessary for `generate()` in this case, `no_timestamps_token_id`, and cannot be set through the pipeline (the pipeline would have to support `generation_config` as well). Again you would have to do a `GenerateConfig.from_pretrained(\"...\")` which should be automatically done (it's the purpose of having a save file).\r\n\r\n4. Sorry, the warning comes from [here](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L1186)",
"Before going down at the individual points, I think it is worth agreeing on high-level design choices :) The goal of this refactor was to separate the two types of configurations, which were being held in the same file and class. It was also made to be retrocompatible with existing uses. \r\n\r\nBeyond these two basic points, there was a design decision to nudge users into treating these two configurations separately, so they can evolve in isolation and have minimal cross-dependencies. I've also made an intentional effort to avoid using the term `config` in `GenerationConfig`, when referring to the model config. If we agree that this is a desirable property, then some short-term pain should be endured.\r\n\r\n1. While it is simple to implement, it will only be useful in the transition phase -- in the future, no generation parameters are held in the model config. If we do implement it, `GenerationConfig.from_pretrained()` will acquire a dependency on another class and it will remove the incentive for the users to use separate files (both undesirable IMO). It will also make the name of the function a lie, as we will not be returning from a \"pre-trained Generation Config\". Side-note: the existing `GenerationConfig.from_model_config()` does this cross-class loading while keeping responsibilities isolated.\r\n2. (I'm assuming you're writing about `GenerationConfig.from_model_config()`, as there is no `GenerationConfig.from_model_config()`) Happy to expand it :) What format would you like to get here, a dict?\r\n3. It is pre-initialized [here](https://github.com/huggingface/transformers/blob/91ff7efeeb3e6bb10d83702db24108bb6583e013/src/transformers/modeling_utils.py#L1036) from the model config, for retrocompatibility. It is then overwritten [here](https://github.com/huggingface/transformers/blob/91ff7efeeb3e6bb10d83702db24108bb6583e013/src/transformers/modeling_utils.py#L2505) from the generation config file, if it exists. 
Hopefully, many versions from now, the initialization from the model config will be removed 🤞\r\n4. I don't get this point -- we do use a `GenerationConfig` loaded from `from_pretrained` by default 🤔 If the question is that all `.generate()` calls should be using a `GenerationConfig` initialized that way, then we go back to the question in 1. :)\r\n5. The warning only applies when the user modifies the model configuration to achieve a different generation behavior. This is actually a problem that relates to 1.: if we do load the generation config from the model config, and the user follows this pattern [change the model config], how can we ensure correctness? The generation config would no longer match the model config regarding generation parameters. My take here was to raise a warning and slowly kill this behavior, as it is making the separation of concerns impossible at the moment unless the logic you see at the start of `.generate()` is added everywhere. This is also why changes in the model config no longer work to parameterize `.generate()` if you call specific generation methods (like `greedy_search()`)\r\nE.g.\r\n```py\r\n# modify generation properties through ad hoc model config changes\r\nmodel = BartForConditionalGeneration.from_pretrained(\"hf-internal-testing/tiny-random-bart\", max_length=10)\r\n# or\r\nmodel = BartForConditionalGeneration.from_pretrained(\"hf-internal-testing/tiny-random-bart\")\r\nmodel.config.max_length = 10\r\n# both will raise that warning at generation time, since that is no longer the job of model config\r\nmodel.generate(...)\r\n\r\n# however, this will NOT raise the warning, since it's the generation config's job\r\nmodel = BartForConditionalGeneration.from_pretrained(\"hf-internal-testing/tiny-random-bart\")\r\nmodel.generation_config.max_length = 10\r\nmodel.generate(...)\r\n```\r\n\r\nTwo additional comments/points:\r\n6. You touched the point of having multiple `tasks`. That is not yet implemented, but highly desirable! 
At the very least, to make sure the right pipeline can load the right generation config file.\r\n7. I'm noticing that we are missing the functionality to save the generation config when `model.save_pretrained()` is called, which I forgot to add 🤦 This will ensure that ALL new saved models will have a `generation_config.json` if they can call `.generate()`. EDIT: https://github.com/huggingface/transformers/pull/21264\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closing as I have not more comments! Things look good for now!",
"Just to clarify the intention here...does the warning mean that we are supposed to override the model.config.do_sample parameter on the model object whenever we change temperature values in the generate keyword arguments from 0 to non-zero? (that seems to be what is needed unless I am missing something)\r\n\r\nFor example for GPT-Neo I was passing `do_sample=temperature > 0` so I don't get logitprocessor errors when we choose zero temp",
"Hey @slundberg 👋 \r\n\r\n`temperature` and `do_sample` control two different things, please refer to the [documentation](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig) :) Argument validation is yet to be added.",
"Hey! Got it. I see that do_sample controls the firing of the `sample` method vs the `greedy` method (or the beam search equivelents), but since setting temperature=0 implies greedy decoding, some common APIs (like OpenAI) automatically set sample vs greedy based on the temperature. If I expose an interface that does the same (sets do_sample based on the temperature given by the user), then I run into this warning unless I change the actual model object that was passed (which is not ideal). Not sure if this makes sense.",
"I see -- yeah, if that's the intended behavior (use greedy decoding when temperature is 0*), then it makes sense! From your past two messages, I'm assuming you are controlling it through `model.config` [`model.config.do_sample=temperature>0`], which raises the warning. Would you be able to control it through `.generate()` [`.generate(do_sample=temperature>0)`] or through `model.generation_config` [`model.generation_config.do_sample=temperature>0`]?\r\n\r\n*While they are analytically equivalent, if you apply a very small temperature you'll get `-inf` numbers. ATM it results in an exception in the sampling phase, because of the `-inf`, in the feature we will raise an exception at the argument validation phase. In `.generate()`, we'd rather be explicit with exceptions than implicit with subtle corrections like switching into greedy decoding, so that the programmer has full control over what's happening :)"
] | 1,674
| 1,693
| 1,676
|
COLLABORATOR
| null |
# Generation config
I know it has just been added so it is normal! But the following are missing (and are pretty intuitive w.r.t our other objects such as configs, processors etc):
- [ ] `GenerationConfig.from_pretrained("openai/whisper-tiny.en")` where the path does not already have a `generation_config.json`. Currently this only looks for the `generation_config.json`, but it should be possible to initialise it by default from the `config` if a generation file is not present (as is actually done in `generate`).
- [ ] Similarly, the `from_config` only supports a config `object` which has to be initialized.
- [ ] The `generation_config` should be automatically initialised before the `generate` function: this is because other models will call `super().generate` after having played with extra kwargs, and these kwargs are needed for pre-processing (adding selected logit processors, etc.).
- [ ] [edit] when running `generate()` the generation config should by default be initialized `from_pretrained`; this is the whole point of saving a `generation_config` file, IMO
- [ ] The following warning
```python
warnings.warn(
"You have modified the pretrained model configuration to control generation. This is a"
" deprecated strategy to control generation and will be removed soon, in a future version."
" Please use a generation configuration file (see"
" https://huggingface.co/docs/transformers/main_classes/text_generation)")
```
should be discussed, as it is more efficient to modify the generation parameters on the fly if you are using a model that requires special arguments (like whisper), which is what `generation_config` was designed for. Indeed, you might want to do `translation` but the default task on the hub is `transcription`. A newbie has to go through finding `processor.set_forced_decoder_ids` while they could just do `model.generate(task = 'transcribe')`. The same goes for `return_timestamps`, which uses the same model, so there is no need for a new config / new model on the hub.
More generally, I think that if we have models that use prompts and special initial tokens, this is very useful.
cc @sgugger , @gante, @patrickvonplaten and @LysandreJik
This arises in the refactoring of whisper to have somewhat of a 1-1 with OpenAI, where you just load the model and can do `generate()`. It also simplifies the whisper-specific parts of the ASR pipeline.
ps: I might be completely wrong about this! Feel free to give me feedback!
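To make the first checkbox concrete, here is a toy, dict-based sketch of the requested fallback (the helper name and `GENERATION_KEYS` are hypothetical, not the actual `transformers` API): prefer a saved `generation_config.json`, and only derive generation defaults from the model config when no such file exists.

```python
# Toy sketch of the requested fallback; names are illustrative only.
# Generation-related keys that would be pulled out of a model config.
GENERATION_KEYS = {"max_length", "do_sample", "num_beams", "forced_decoder_ids"}


def resolve_generation_config(saved_generation_config, model_config):
    """Prefer a saved generation config; else derive one from the model config."""
    if saved_generation_config is not None:
        # A generation_config.json was found on the hub: use it as-is.
        return dict(saved_generation_config)
    # No file present: fall back to the generation-related model config keys,
    # which is what `generate()` effectively does today.
    return {k: v for k, v in model_config.items() if k in GENERATION_KEYS}
```

With this behavior, callers would get a usable generation config from a single entry point, whether or not the repository ships a dedicated file.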
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21220/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21219
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21219/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21219/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21219/events
|
https://github.com/huggingface/transformers/pull/21219
| 1,551,180,409
|
PR_kwDOCUB6oc5IOxEf
| 21,219
|
Microphone live inference catching up when inference is too slow (whisper).
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
When using relatively slow inference models (like whisper, especially the large variants) on moderate hardware, the live inference snippets would be so slow that they would feel extremely laggy.
This PR fixes it by adding an estimate of what real time is, and simply skipping inferences when we're too far behind.
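The catch-up idea can be illustrated with a small sketch (a hypothetical helper, not the code in this PR): compare how much audio has been transcribed against wall-clock time, and skip a chunk once the lag exceeds a threshold.

```python
def should_skip_inference(audio_processed_s, wall_clock_elapsed_s, max_lag_s=1.0):
    """Return True when transcription has fallen too far behind real time.

    audio_processed_s: seconds of audio already transcribed.
    wall_clock_elapsed_s: seconds of real time since the stream started.
    max_lag_s: tolerated lag before chunks are skipped (illustrative default).
    """
    return (wall_clock_elapsed_s - audio_processed_s) > max_lag_s
```

Skipped chunks are simply dropped, so the live transcript stays close to what the user is currently saying instead of drifting further behind.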
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21219/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21219",
"html_url": "https://github.com/huggingface/transformers/pull/21219",
"diff_url": "https://github.com/huggingface/transformers/pull/21219.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21219.patch",
"merged_at": 1674246824000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21218
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21218/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21218/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21218/events
|
https://github.com/huggingface/transformers/pull/21218
| 1,551,166,656
|
PR_kwDOCUB6oc5IOuFi
| 21,218
|
Replace reduce_labels with do_reduce_labels
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
The `reduce_labels` flag for most of the image processors was deprecated in favor of `do_reduce_labels`. This was to keep consistent with the `do_xxx` pattern used by other flags.
This PR deprecates the flag for any models that still use `reduce_labels`.
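The deprecation pattern roughly looks like the following simplified sketch (a toy class, not the actual image-processor code): accept the old flag, emit a warning, and map it onto the new one.

```python
import warnings


class ToyImageProcessor:
    """Simplified illustration of the `reduce_labels` -> `do_reduce_labels` deprecation."""

    def __init__(self, do_reduce_labels=False, reduce_labels=None):
        if reduce_labels is not None:
            # Old flag still works, but warns and forwards to the new name.
            warnings.warn(
                "`reduce_labels` is deprecated, use `do_reduce_labels` instead",
                FutureWarning,
            )
            do_reduce_labels = reduce_labels
        self.do_reduce_labels = do_reduce_labels
```

Existing callers keep working during the deprecation window, while the warning nudges them toward the consistent `do_xxx` naming.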
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21218/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21218",
"html_url": "https://github.com/huggingface/transformers/pull/21218",
"diff_url": "https://github.com/huggingface/transformers/pull/21218.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21218.patch",
"merged_at": 1674494493000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21217
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21217/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21217/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21217/events
|
https://github.com/huggingface/transformers/pull/21217
| 1,551,149,824
|
PR_kwDOCUB6oc5IOqYD
| 21,217
|
[`BLIP`] fix doctest
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes `BLIP` doctest.
Link to failing job: https://github.com/huggingface/transformers/actions/runs/3964164193
The docstring of the `forward` method of `BlipForQuestionAnswering` has been corrected to educate users on how to correctly use this module after https://github.com/huggingface/transformers/pull/21021 was merged.
The logic of the `forward` method is now pretty much the same as in [`T5`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py#L1592-L1610).
cc @ydshieh 💯
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21217/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21217",
"html_url": "https://github.com/huggingface/transformers/pull/21217",
"diff_url": "https://github.com/huggingface/transformers/pull/21217.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21217.patch",
"merged_at": 1674468984000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21216
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21216/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21216/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21216/events
|
https://github.com/huggingface/transformers/pull/21216
| 1,551,141,512
|
PR_kwDOCUB6oc5IOojv
| 21,216
|
Skip `test_multi_gpu_data_parallel_forward` for `UperNetModelTest`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Wait, are you saying that models that using add_module can't leverage multi-GPU training?",
"> Wait, are you saying that models that using add_module can't leverage multi-GPU training?\r\n\r\n\r\n(well, just not with the way we used in `test_multi_gpu_data_parallel_forward`, which uses `nn.DataParallel`)\r\n(PyTorch says `It is recommended to use [DistributedDataParallel] instead)\r\n\r\n**Partially** , that's my observation while debugging `Maskformer`, `BEIT`, and then with `LayoutLMV2`, `Data2VecVision` etc.\r\n\r\n```\r\n @unittest.skip(\r\n reason=\"Data2VecVision has some layers using `add_module` which doesn't work well with `nn.DataParallel`\"\r\n )\r\n```\r\n\r\nsee https://github.com/huggingface/transformers/pull/17864\r\n\r\n",
"@NielsRogge Let me know if you have further question :-) before giving 👍✅ . Thank you 🙏 "
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Same as `BEIT`, this model uses `add_module` in some layers, so we need to skip this test.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21216/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21216",
"html_url": "https://github.com/huggingface/transformers/pull/21216",
"diff_url": "https://github.com/huggingface/transformers/pull/21216.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21216.patch",
"merged_at": 1674553277000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21215
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21215/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21215/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21215/events
|
https://github.com/huggingface/transformers/pull/21215
| 1,551,116,283
|
PR_kwDOCUB6oc5IOjLP
| 21,215
|
Fix OneFormer Docstrings
|
{
"login": "praeclarumjj3",
"id": 54928629,
"node_id": "MDQ6VXNlcjU0OTI4NjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/54928629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/praeclarumjj3",
"html_url": "https://github.com/praeclarumjj3",
"followers_url": "https://api.github.com/users/praeclarumjj3/followers",
"following_url": "https://api.github.com/users/praeclarumjj3/following{/other_user}",
"gists_url": "https://api.github.com/users/praeclarumjj3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/praeclarumjj3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/praeclarumjj3/subscriptions",
"organizations_url": "https://api.github.com/users/praeclarumjj3/orgs",
"repos_url": "https://api.github.com/users/praeclarumjj3/repos",
"events_url": "https://api.github.com/users/praeclarumjj3/events{/privacy}",
"received_events_url": "https://api.github.com/users/praeclarumjj3/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes docstrings for OneFormer.
- [x] Checked that all the doctests passed for the following command:
```bash
python3 -m pytest -v --make-reports doc_tests_gpu --doctest-modules src/transformers/models/oneformer/ -sv --doctest-continue-on-failure --doctest-glob="*.mdx"
```
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21215/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21215",
"html_url": "https://github.com/huggingface/transformers/pull/21215",
"diff_url": "https://github.com/huggingface/transformers/pull/21215.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21215.patch",
"merged_at": 1674232632000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21214
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21214/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21214/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21214/events
|
https://github.com/huggingface/transformers/issues/21214
| 1,551,004,082
|
I_kwDOCUB6oc5ccnGy
| 21,214
|
Support for OrderedConstraints, TemplateConstraints and LiteralConstraints in force_words_ids
|
{
"login": "ruanchaves",
"id": 14352388,
"node_id": "MDQ6VXNlcjE0MzUyMzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/14352388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruanchaves",
"html_url": "https://github.com/ruanchaves",
"followers_url": "https://api.github.com/users/ruanchaves/followers",
"following_url": "https://api.github.com/users/ruanchaves/following{/other_user}",
"gists_url": "https://api.github.com/users/ruanchaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruanchaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruanchaves/subscriptions",
"organizations_url": "https://api.github.com/users/ruanchaves/orgs",
"repos_url": "https://api.github.com/users/ruanchaves/repos",
"events_url": "https://api.github.com/users/ruanchaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruanchaves/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @gante",
"Hi @ruanchaves 👋 \r\n\r\nI'm not sure whether I understand the issue you described above. Our generation methods return the sequence log probabilities, from which you can compute the sequence perplexity. What would be missing for your use case?\r\n\r\nRegarding `force_words_ids`, I'm reluctant to add more features there -- it has low usage and a high maintenance cost. I might reconsider my position here if I see more demand for further functionality :)",
"Olá @gante !\r\n\r\n> I'm not sure whether I understand the issue you described above. Our generation methods return the sequence log probabilities, from which you can compute the sequence perplexity. \r\n\r\nTrue, but I want the sequence log probabilities for a predefined sequence. I already have a sequence of tokens and I want the model to calculate its perplexity. I don't want the perplexity of a sequence generated through beamsearch or greedy search. \r\n\r\nWhen [lm_scorer](https://github.com/simonepri/lm-scorer) was conceived, there was no straightforward way to do this with `transformers`:\r\n\r\n```python\r\n# Return token probabilities (provide log=True to return log probabilities)\r\nscorer.tokens_score(\"I like this package.\")\r\n# => (scores, ids, tokens)\r\n# scores = [0.018321, 0.0066431, 0.080633, 0.00060745, 0.27772, 0.0036381]\r\n# ids = [40, 588, 428, 5301, 13, 50256]\r\n# tokens = [\"I\", \"Ġlike\", \"Ġthis\", \"Ġpackage\", \".\", \"<|endoftext|>\"]\r\n```\r\n\r\nIs this still the case? I hope you can point me in the right direction if new features were added since [lm_scorer](https://github.com/simonepri/lm-scorer) was released.\r\n\r\n> Regarding `force_words_ids`, I'm reluctant to add more features there -- it has low usage and a high maintenance cost. I might reconsider my position here if I see more demand for further functionality :)\r\n\r\nI get it, but being able to calculate the perplexity of a predefined sequence sounds like an essential feature to me, regardless of where it is implemented.",
"Hey @ruanchaves 👋 \r\n\r\nYeah, we lack an easy interface to compute the logits of existing sentences, and that's something I really like to add ASAP! I'm planning to add it within the next month, but if you'd like to give me a hand you'd be more than welcome 🙌 \r\n\r\nThe planned interface is\r\n```python\r\nlog_scores = model.compute_token_scores(tokens, normalize_logits)\r\n```\r\nwhere `tokens` is the tokenized input (so it can be used in different modalities) and `normalize_logits` is an optional boolean (defaulting to true) to control whether we want to renormalize the model logits",
"@gante , \r\n\r\n> Yeah, we lack an easy interface to compute the logits of existing sentences, and that's something I really like to add ASAP! I'm planning to add it within the next month, but if you'd like to give me a hand you'd be more than welcome 🙌\r\n\r\nGood! This would close the issue for me, as it's the thing I'm actually looking for. I'll be watching your PRs and see if I can contribute somehow. \r\n\r\nSuggestion: consider adding the `compute_token_scores` method to masked language models as well. This has been implemented a few years ago at [awslabs/mlm-scoring](https://github.com/awslabs/mlm-scoring), but just like lm-scorer, it's no longer maintained. ",
"Have there been updates on the implementations of `OrderedConstraints` and `TemplateConstraints`? I find myself needing both.",
"Hi @Ayenem 👋 \r\n\r\nNo developments, our team is out of bandwidth to expand `Constraints` at the moment :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,707
| 1,707
|
CONTRIBUTOR
| null |
### Feature request
As raised by @sijunhe in [this blog post](https://huggingface.co/blog/constrained-beam-search), the `force_words_ids` argument of the `model.generate()` method needs to be modified to support `OrderedConstraints` and `TemplateConstraints`.
In addition, there is a need for a `LiteralConstraints` subclass. This would enable generating exactly the same list of tokens given in the `force_words_ids` argument, which would in turn allow for the calculation of sentence perplexity across all language models in the library by making use of [the attribute implemented in this PR](https://github.com/huggingface/transformers/pull/14654).
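Once per-token log probabilities are available (for example via `output_scores=True` in `generate()`), turning them into a sentence perplexity is a small, model-independent step. A minimal sketch of that step, assuming natural-log probabilities (the function name is illustrative, not part of the library):

```python
import math
from typing import List

def perplexity_from_logprobs(token_logprobs: List[float]) -> float:
    """Perplexity = exp(negative mean of the per-token log probabilities)."""
    if not token_logprobs:
        raise ValueError("need at least one token log probability")
    avg_neg_logprob = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_neg_logprob)

# A uniform distribution over 4 outcomes gives perplexity 4, regardless of length.
logprobs = [math.log(0.25)] * 6
print(perplexity_from_logprobs(logprobs))  # ≈ 4.0
```

Any `LiteralConstraints`-style feature would only need to supply the `token_logprobs` list for the forced sequence; the aggregation above stays the same.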
### Motivation
Currently, there is no standard way of calculating sentence perplexity and implementing it requires a lot of boilerplate code, which may not always work as intended. Third-party libraries such as [lm-scorer](https://github.com/simonepri/lm-scorer), which implemented this functionality, are no longer maintained and do not support all language models in the library.
### Your contribution
I would be interested in working on this PR as I'm the maintainer of a third-party library ( [hashformers](https://github.com/ruanchaves/hashformers) ) that performs sentence perplexity calculations with the Transformers library.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21214/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21213
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21213/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21213/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21213/events
|
https://github.com/huggingface/transformers/pull/21213
| 1,550,922,808
|
PR_kwDOCUB6oc5IN58K
| 21,213
|
Fix GPTJ doctest
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
In #21178, the checkpoint used for `GPTJForSequenceClassification` was changed back to `_CHECKPOINT_FOR_DOC`, which is `hf-internal-testing/tiny-random-gptj`. That checkpoint has `self.score` with shape `[2, 512]`, but the model expects `self.score` with shape `[2, 32]` since `config.n_embd=32`. The checkpoint's `512` came from a previous mistake of using `n_ctx`, an error fixed in #14190, see [here](https://github.com/huggingface/transformers/commit/ce91bf9a3431b4d260005de84c0b0fa394409a3c#diff-61155574bf9c9669ccdfdf7dd508a5979b4e4915cc95f7ff4a63fee05a0e2715).
The PR uses another tiny checkpoint to pass the test.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21213/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21213",
"html_url": "https://github.com/huggingface/transformers/pull/21213",
"diff_url": "https://github.com/huggingface/transformers/pull/21213.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21213.patch",
"merged_at": 1674225301000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21212
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21212/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21212/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21212/events
|
https://github.com/huggingface/transformers/pull/21212
| 1,550,683,891
|
PR_kwDOCUB6oc5INGgg
| 21,212
|
Update `huggingface_hub` version
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Update the `huggingface_hub` version to `0.12.0rc0` and make the changes required by this version.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21212/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21212",
"html_url": "https://github.com/huggingface/transformers/pull/21212",
"diff_url": "https://github.com/huggingface/transformers/pull/21212.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21212.patch",
"merged_at": 1674224160000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21211
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21211/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21211/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21211/events
|
https://github.com/huggingface/transformers/issues/21211
| 1,550,628,374
|
I_kwDOCUB6oc5cbLYW
| 21,211
|
Mask-fill pipeline for t5 and flan-t5
|
{
"login": "Bachstelze",
"id": 19904888,
"node_id": "MDQ6VXNlcjE5OTA0ODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19904888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bachstelze",
"html_url": "https://github.com/Bachstelze",
"followers_url": "https://api.github.com/users/Bachstelze/followers",
"following_url": "https://api.github.com/users/Bachstelze/following{/other_user}",
"gists_url": "https://api.github.com/users/Bachstelze/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bachstelze/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bachstelze/subscriptions",
"organizations_url": "https://api.github.com/users/Bachstelze/orgs",
"repos_url": "https://api.github.com/users/Bachstelze/repos",
"events_url": "https://api.github.com/users/Bachstelze/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bachstelze/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2197722692,
"node_id": "MDU6TGFiZWwyMTk3NzIyNjky",
"url": "https://api.github.com/repos/huggingface/transformers/labels/t5",
"name": "t5",
"color": "509fc4",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @ArthurZucker, can I work on this issue? Thank you!",
"Sure! Awesome that you want to take this on! Feel free to open a PR and ping me if you need any pointers",
"@ArthurZucker I have several questions:\r\n1. Is there any slack channel/discord where we can discuss the details of the issue? \r\n2. About the scope of the issue, we have one workaround for a single mask. There are also requests for multi masks in one sentence and also the probability distribution over the targets. Do we focus on the single mask case first in the initial MR? If so, is our plan to integrate the current workaround into `FillMaskPipeline`?\r\n3. I checked the `FillMaskPipeline` class and `run_single` method. I feel a little confused where's the best place to add the logic. I would appreciate it if you could point out some starting points!\r\n\r\nThank you for your help! ",
"Hey! After digging a little bit, I am not sure that we actually need to do this PR. But let me answer to your questions and explain why. \r\n1. I think you can ping us on the Hugging Face discord, but the best medium would be a PR on github or this issue 😉 \r\n2. Let's drop the potential addition. Instead of using the pipeline `FillMask`, which is specifically for models trained with a MaskedLMHead, you can use the following script : \r\n```python \r\nfrom transformers import AutoTokenizer, T5ForConditionalGeneration\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"t5-base\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"t5-base\")\r\ninput_text = \"A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3> .\"\r\ninput_ids = tokenizer(input_text, return_tensors=\"pt\").input_ids\r\noutputs = model.generate(input_ids)\r\nprint(tokenizer.decode(outputs[0]))\r\n<pad><extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.\r\n```\r\nThis is called `text2text-generation` and should work with the pipeline.\r\n```python \r\ntext2text_generator = pipeline(\"text2text-generation\", model = \"t5-base\")\r\ntext2text_generator(input_text)\r\n[{'generated_text': 'man beer a salt.'}]\r\n```\r\nIn order to get the scores, you should be using `generate()`. \r\n",
"Does that fit in the use case that you want? ",
"@ArthurZucker Hi, if i want to fill multiple words (specific number is unknown), \r\n\r\nfor example\r\n\r\n`He <mask> now -> He is happy now`\r\n\r\nWould this be possible?",
"No, I don't think this can be possible with a single mask. As you can see in the detail about the [task](https://huggingface.co/tasks/fill-mask). \r\n\r\nClosing this as the issue is solved 😉 @anruijian ping me and re-open if you feel like it did not solve your issue ",
"@Leolty\r\nIt could be possible that the model generates multiple words if it was pretrained with longer masked spans like in [UL2 mixture of denoisers](https://ai.googleblog.com/2022/10/ul2-20b-open-source-unified-language.html). Sometimes the t5 models already generate multiple words (and predictions) for one mask. With the input text ```India is a <extra_id_0> of the world.' ``` into t5-base it generates ```<pad><extra_id_0> part<extra_id_1> developing part<extra_id_2> part of the rest<extra_id_3> part<extra_id_4> part of the world.<extra_id_5>```.\r\n\r\n@anruijian\r\nAre you still interested in this issue?\r\n\r\nI wrote this function to get the scores of target words:\r\n```\r\ndef get_target_scores(text, targets, t5_tokenizer, t5_model):\r\n \"\"\"\r\n A wrapper function for a mask fill-in with target words for (flan-)t5\r\n Parameters:\r\n text(String): The input text with <extra_id_0> as mask\r\n targets(list): A list with target words\r\n t5_tokenizer(T5Tokenizer): The loaded tokenizer\r\n t5_model(T5ForConditionalGeneration): The loaded t5 model\r\n \"\"\"\r\n target_numbers = len(targets)\r\n constrain_ids_list = []\r\n\r\n # encode the target words\r\n for target in targets:\r\n encoded_target_ids = t5_tokenizer(target, add_special_tokens=False).input_ids\r\n constrain_ids_list.append(encoded_target_ids)\r\n\r\n # encode the input text\r\n encoded = t5_tokenizer.encode_plus(text, add_special_tokens=True, return_tensors='pt')\r\n input_ids = encoded['input_ids'].to(DEVICE)\r\n\r\n # generate the outputs with the target as constrains\r\n outputs = t5_model.generate(input_ids=input_ids,\r\n force_words_ids=[constrain_ids_list],\r\n num_beams=target_numbers+5, num_return_sequences=target_numbers+5,\r\n return_dict_in_generate=True,\r\n output_scores=True,\r\n max_length=2)\r\n \r\n # calculate the mask position\r\n _0_index = text.index('<extra_id_0>')\r\n _result_prefix = text[:_0_index]\r\n _result_suffix = text[_0_index+12:] # 12 is the length of 
<extra_id_0>\r\n\r\n result_dict = {}\r\n # filter each output and save it into the result dictionary\r\n for output_number, output in enumerate(outputs[\"sequences\"]):\r\n _txt = t5_tokenizer.decode(output[1:], skip_special_tokens=False, clean_up_tokenization_spaces=False)\r\n\r\n if _txt in targets:\r\n # save the target score\r\n result_dict[_txt] = outputs[\"sequences_scores\"][output_number]\r\n # complete text\r\n print(_result_prefix + _txt + _result_suffix)\r\n\r\n # return the aggregated result\r\n return result_dict\r\n\r\n# test the function with this input text\r\ntext = 'India is a <extra_id_0> of the world.'\r\nscores = get_target_scores(text, [\"part\", \"state\", \"country\", \"democracy\"], t5_tokenizer, t5_model)\r\nprint(scores)\r\n```\r\nI suggest that we reopen this issue and wrap such functions in the huggingface (fill-mask-)pipeline.\r\n@ArthurZucker\r\nIs the fill-mask-pipeline only for models with a MaskedLMHead?\r\nWe should find a way to integrate similar models. There are probably coming more such models, considering the improvement with the mixture of denoisers.",
"Interesting. I don't think I am against adding this, but will ping @Narsil to see what he thinks. \r\nIMO: \r\n- Pros: other models can also benefit from this. T5 is one of the most used, but flan T5 is also on fire! \r\n- Cons: not really equivalent to mask fill pipeline? Would break the fact that it is normally only for models with the `MaskedLMHead`",
"I think it fills `fill-mask` quite nicely, in the sense the given a masked input, the model should tell us what should be under mask.\r\n\r\nNow potential caveats/pains:\r\n\r\n- Currently each mask return a single token, where the id, is returned, that wouldn't be possible with multiple items, potential Breaking change needed here (or mostly likely painful legacy code to maintain since we're unlikely to break here).\r\n- Currently, if there are multiple masks, we return each potential mask locations potential tokens independantly. Not sure how t5/flan-t5 work here.\r\n- There is an argument `top_k` which is quite necessary in a lot of situations for `fill-mask`, how that would work on generative ? (Would it get translated to beam-search maybe ?)\r\n- Looking back at the example, it seems that you are suggesting a filling to the model in the decoder prompt, is that correct ? There is the `targets` parameters that might do something similar for bert-like approaches. Not sure how much they really overlap. (Can you have multiple various prompts, and find the most likely?)\r\n- Also does the generative work without any prompt ?\r\n\r\nOverall I'm all in favor of adding more complex (hopefully **better**) ways to fill mask, but I anticipate quite some pain in the actual implementation, dealing what's already there and making the overall experience similar enough.",
"Also this task is called `Corrupting Spans` in the original T5 paper no? \r\n\r\n\r\n",
"I am not sure if this is the right place to ask this, but....I understand that text2text-generation pipeline can be used to achieve kind of MLM objective. But what if i want to train T5 MLM kind of objective on my own data ? Anyone can point me to any resources?",
"These kind of question should be asked on the [`forum`](https://discuss.huggingface.co/). \r\nAlso find the attached snippet that shows how you can fill in with multiple words. \r\n```python \r\nfrom transformers import T5ForConditionalGeneration, AutoTokenizer\r\nimport torch\r\n\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"t5-base\", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to(\"cuda\") \r\ntokenizer = AutoTokenizer.from_pretrained(\"t5-base\")\r\n\r\ninput_string = \"Mr. Dursley was the director of a firm called <extra_id_0>, which made <extra_id_1>. He was a big, solid man with a bald head. Mrs. Dursley was thin and <extra_id_2> of neck, which came in very useful as she spent so much of her time <extra_id_3>. The Dursleys had a small son called Dudley and <extra_id_4>\" \r\n\r\nmodel.cuda()\r\ninputs = tokenizer(input_string, return_tensors=\"pt\", add_special_tokens=False).input_ids.to(\"cuda\")\r\n\r\noutputs = model.generate(inputs, max_length=200)\r\n\r\nprint(tokenizer.decode(outputs[0]))\r\n```\r\n```\r\n<pad><extra_id_0> Dursley<extra_id_1> a fortune<extra_id_2> had a long kind<extra_id_3> in<extra_id_4> a daughter named Mary<extra_id_5> Dursley<extra_id_6> with a kind<extra_id_7> in<extra_id_8> in<extra_id_9> a daughter named Mary<extra_id_10> Dursley<extra_id_11> Dursley<extra_id_12> a fortune<extra_id_13> Dursley<extra_id_14> had a short piece<extra_id_15> in<extra_id_16> Dursley<extra_id_17> a fortune<extra_id_18> Dursley<extra_id_19> a fortune<extra_id_20> in Dursley<extra_id_21> a daughter named Mary<extra_id_22> had a long, thick piece<extra_id_23> had a long piece<extra_id_24> with a short piece<extra_id_25> a daughter named<extra_id_26> named<extra_id_27> </s>\r\n```\r\n\r\n"
] | 1,674
| 1,677
| 1,675
|
NONE
| null |
### Feature request
So far it isn't possible to use T5 models with the standard fill-mask pipeline, so everyone builds their own custom workaround.
### Motivation
It would save work and reduce complexity if this functionality were integrated.
### Your contribution
There is already a workaround: https://github.com/huggingface/transformers/issues/3985
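The workaround boils down to generating with sentinel tokens and then parsing the decoded output back into per-mask fills. A minimal, model-free sketch of the parsing half (the example string is the `t5-base` output quoted in the comments above; the function name is illustrative):

```python
import re
from typing import Dict

def parse_sentinel_output(decoded: str) -> Dict[str, str]:
    """Map each <extra_id_N> sentinel in a decoded T5 string to the
    text the model generated for that mask."""
    decoded = decoded.replace("<pad>", "").replace("</s>", "")
    # Split on the sentinels, keeping them via the capture group.
    parts = re.split(r"(<extra_id_\d+>)", decoded)
    fills: Dict[str, str] = {}
    current = None
    for part in parts:
        if re.fullmatch(r"<extra_id_\d+>", part):
            current = part
            fills[current] = ""
        elif current is not None:
            fills[current] = part.strip()
    return fills

out = "<pad><extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>"
print(parse_sentinel_output(out))
# {'<extra_id_0>': 'man', '<extra_id_1>': 'beer', '<extra_id_2>': 'a',
#  '<extra_id_3>': 'salt', '<extra_id_4>': '.'}
```

A pipeline integration would mostly be wiring this parsing plus the `generate()` call behind the existing fill-mask interface.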
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21211/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21210
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21210/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21210/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21210/events
|
https://github.com/huggingface/transformers/pull/21210
| 1,550,620,109
|
PR_kwDOCUB6oc5IM4tL
| 21,210
|
Declare __len__ method in PreTrainedTokenizerBase
|
{
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
When type hinting a tokenizer with `PreTrainedTokenizerBase`, type checkers don't know it supports `len(tokenizer)`. Both the slow and fast versions implement `__len__`, so we declare it on the base class to keep the type hints happy; we could also make it an `abstractmethod` if needed.
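For illustration, a toy version of the pattern (class names here are made up, not the real `transformers` classes): declaring `__len__` on the base class lets type checkers accept `len(tokenizer)` on a base-typed variable, while each subclass supplies the actual implementation.

```python
class ToyTokenizerBase:
    """Base class: declares __len__ so len(tok) type-checks on base-typed variables."""

    def __len__(self) -> int:
        raise NotImplementedError

class ToySlowTokenizer(ToyTokenizerBase):
    def __init__(self, vocab: dict):
        self.vocab = vocab

    def __len__(self) -> int:
        # Vocabulary size, mirroring what the real tokenizers return.
        return len(self.vocab)

tok: ToyTokenizerBase = ToySlowTokenizer({"hello": 0, "world": 1})
print(len(tok))  # → 2
```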
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21210/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21210",
"html_url": "https://github.com/huggingface/transformers/pull/21210",
"diff_url": "https://github.com/huggingface/transformers/pull/21210.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21210.patch",
"merged_at": 1674226474000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21209
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21209/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21209/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21209/events
|
https://github.com/huggingface/transformers/pull/21209
| 1,550,578,312
|
PR_kwDOCUB6oc5IMvun
| 21,209
|
Encode object type in Donut tokens
|
{
"login": "ts2095",
"id": 24903193,
"node_id": "MDQ6VXNlcjI0OTAzMTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/24903193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ts2095",
"html_url": "https://github.com/ts2095",
"followers_url": "https://api.github.com/users/ts2095/followers",
"following_url": "https://api.github.com/users/ts2095/following{/other_user}",
"gists_url": "https://api.github.com/users/ts2095/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ts2095/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ts2095/subscriptions",
"organizations_url": "https://api.github.com/users/ts2095/orgs",
"repos_url": "https://api.github.com/users/ts2095/repos",
"events_url": "https://api.github.com/users/ts2095/events{/privacy}",
"received_events_url": "https://api.github.com/users/ts2095/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Not sure why `black` is failing. `make fixup` doesn't change anything for me.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21209). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @ts2095 , thanks for your contribution and sorry for the late reply.\r\n\r\nCould you rebase your branch on main to make the CI green? Also, can you confirm this update is 100% backwards compatible?",
"Hi @ts2095 , sorry for the late reply here!\r\n\r\nWould you be able to rebase your branch on the main branch of Transformers?\r\n\r\ncc'ing @amyeroberts here for a review",
"@ts2095 There was a [recent update](https://github.com/huggingface/transformers/pull/22204) on main, updating our CI images to run on Python 3.8, which I believe should resolve the import issue with `from typing import Literal`. Could you rebase to include these? ",
"@amyeroberts We still support 3.7 so we cannot accept type-hints using Literal.",
"@ts2095 Can you confirm that this is backwards compatible and that previous token sequences result in the same json output? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,682
| 1,682
|
NONE
| null |
# What does this PR do?
This PR encodes object types into the tokens generated by Donut, which fixes a few issues:
- keys of the same name appearing at different levels of the JSON are no longer confused
- no more ambiguity between a dict and a list of length 1 containing a dict
Additionally, this allows us to keep track of which keys have been opened and closed so far.
Now we can look ahead to find the token that closes the current element. This allows for much
deeper nesting (beyond just 2 levels) without breaking.
There is some fault tolerance included in the look-ahead. If a closing token cannot be found or a new opening token is encountered unexpectedly, ambiguous parts of the text will be discarded and processing continues with the next part of the text that can be converted to JSON without any ambiguity.
This requires matching changes in `json2token`. I wasn't quite sure where to put this. I think at the moment, the `Dataset` code containing that method is only part of the tutorials. Would it make sense to add it here as well? Essentially, all that's needed is something like
```python
from abc import ABCMeta, abstractmethod
class DonutDatasetMixin(ABCMeta):
added_tokens: list
@abstractmethod
def add_tokens(self, list_of_tokens: t.List[str]):
pass
def json2token(
self,
obj: t.Any,
update_special_tokens_for_json_key: bool = True,
sort_json_key: bool = True,
):
"""
Convert an ordered JSON object recursively into a token sequence
Args:
obj: Object to convert
update_special_tokens_for_json_key (bool):
Add encountered keys as special tokens to the processor's tokenizer
sort_json_key (bool): Whether to sort JSON keys in an object alphabetically
"""
if (obj_type := self.get_object_type(obj)) == "dict":
if len(obj) == 1 and "text_sequence" in obj:
return obj["text_sequence"]
else:
output = ""
if sort_json_key:
keys = sorted(obj.keys(), reverse=True)
else:
keys = obj.keys()
for k in keys:
v = obj[k]
v_obj_type = self.get_object_type(v)
if update_special_tokens_for_json_key:
self.add_tokens([rf"<s_{k}-{v_obj_type}>", rf"</s_{k}-{v_obj_type}>"])
output += (
rf"<s_{k}-{v_obj_type}>"
+ self.json2token(obj[k], update_special_tokens_for_json_key, sort_json_key)
+ rf"</s_{k}-{v_obj_type}>"
)
return output
elif obj_type == "list":
return r"<sep/>".join(
[
self.json2token(item, update_special_tokens_for_json_key, sort_json_key)
for item in obj
]
)
else:
obj = str(obj)
if f"<{obj}/>" in self.added_tokens:
obj = f"<{obj}/>" # for categorical special tokens`
return obj
@staticmethod
def get_object_type(obj: t.Any) -> t.Literal["list", "dict", "str"]:
if isinstance(obj, (list, np.ndarray)):
return "list"
if isinstance(obj, dict):
return "dict"
return "str"
```
Then the dataset can be constructed similarly to how it's already done in the tutorial:
```python
class DonutDataset(Dataset, DonutDatasetMixin):
pass
```
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@NielsRogge As promised, the improvements that we made to Donut's `token2json`. It works well with more complex JSON data structures, as demonstrated in the added tests.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21209/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21209",
"html_url": "https://github.com/huggingface/transformers/pull/21209",
"diff_url": "https://github.com/huggingface/transformers/pull/21209.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21209.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21208
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21208/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21208/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21208/events
|
https://github.com/huggingface/transformers/issues/21208
| 1,550,490,060
|
I_kwDOCUB6oc5capnM
| 21,208
|
UL2 Mixture-of-Denoiser loss
|
{
"login": "gaceladri",
"id": 7850682,
"node_id": "MDQ6VXNlcjc4NTA2ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7850682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gaceladri",
"html_url": "https://github.com/gaceladri",
"followers_url": "https://api.github.com/users/gaceladri/followers",
"following_url": "https://api.github.com/users/gaceladri/following{/other_user}",
"gists_url": "https://api.github.com/users/gaceladri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gaceladri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaceladri/subscriptions",
"organizations_url": "https://api.github.com/users/gaceladri/orgs",
"repos_url": "https://api.github.com/users/gaceladri/repos",
"events_url": "https://api.github.com/users/gaceladri/events{/privacy}",
"received_events_url": "https://api.github.com/users/gaceladri/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,677
| 1,677
|
NONE
| null |
### Feature request
The losses introduced in the paper **UL2: Unifying Language Learning Paradigms**.
The Mixture-of-Denoisers losses are described in the UL2 paper: https://arxiv.org/abs/2205.05131
The reference code is based on T5X (which is JAX/Flax): https://github.com/google-research/t5x
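For context, a toy sketch of the span-corruption objective that underlies the Mixture-of-Denoisers; the R/X/S denoisers differ mainly in mean span length and corruption rate. This simplified version uses fixed-length, evenly spaced spans and made-up string sentinels instead of the sampled spans and sentinel token ids used in T5X:

```python
def corrupt_spans(tokens, span_length=3, corruption_rate=0.15):
    """Toy T5-style span corruption: returns (denoiser input, target).

    Assumes the resulting span stride is >= span_length (no overlap).
    """
    n = len(tokens)
    n_spans = max(1, round(n * corruption_rate / span_length))
    stride = n // n_spans  # spread spans evenly instead of sampling
    inputs, targets, prev_end = [], [], 0
    for i in range(n_spans):
        start, end = i * stride, min(i * stride + span_length, n)
        sentinel = f"<extra_id_{i}>"
        inputs += tokens[prev_end:start] + [sentinel]   # span replaced by sentinel
        targets += [sentinel] + tokens[start:end]       # target recovers the span
        prev_end = end
    inputs += tokens[prev_end:]
    return inputs, targets

tokens = list("abcdefghijklmnopqrst")
# R-denoiser-like setting: short spans, low corruption rate
r_inputs, r_targets = corrupt_spans(tokens, span_length=3, corruption_rate=0.15)
# X-denoiser-like setting: long spans, high corruption rate
x_inputs, x_targets = corrupt_spans(tokens, span_length=12, corruption_rate=0.5)
```

The S-denoiser (prefix LM) is not shown; it would simply split the sequence into a prefix input and a suffix target.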
### Motivation
I am requesting the addition of the new losses introduced in the UL2 paper, called Mixture-of-Denoisers. These losses have been shown to improve the performance of unsupervised learning models, and I believe they could benefit the Hugging Face community.
### Your contribution
Opening the request
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21208/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21207
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21207/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21207/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21207/events
|
https://github.com/huggingface/transformers/pull/21207
| 1,550,345,520
|
PR_kwDOCUB6oc5IL-jy
| 21,207
|
Fix `CONFIG_ARCHIVE_MAP_MAPPING_NAMES`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I will merge as it is - don't think it matters (at least not at this moment), and eventually this is going to be deprecated."
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Fix `CONFIG_ARCHIVE_MAP_MAPPING_NAMES` as reported in #21204.
Also, `UPERNET_PRETRAINED_CONFIG_ARCHIVE_MAP` doesn't exist.
Remark: a deprecation is already planned:
```
warnings.warn(
    "ALL_PRETRAINED_CONFIG_ARCHIVE_MAP is deprecated and will be removed in v5 of Transformers. "
    "It does not contain all available model checkpoints, far from it. Checkout hf.co/models for that.",
    FutureWarning,
)
```
Fix #21204
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21207/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21207",
"html_url": "https://github.com/huggingface/transformers/pull/21207",
"diff_url": "https://github.com/huggingface/transformers/pull/21207.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21207.patch",
"merged_at": 1674224530000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21206
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21206/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21206/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21206/events
|
https://github.com/huggingface/transformers/issues/21206
| 1,550,322,342
|
I_kwDOCUB6oc5caAqm
| 21,206
|
OwlVit gives different results compared to original colab version
|
{
"login": "darwinharianto",
"id": 44696192,
"node_id": "MDQ6VXNlcjQ0Njk2MTky",
"avatar_url": "https://avatars.githubusercontent.com/u/44696192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/darwinharianto",
"html_url": "https://github.com/darwinharianto",
"followers_url": "https://api.github.com/users/darwinharianto/followers",
"following_url": "https://api.github.com/users/darwinharianto/following{/other_user}",
"gists_url": "https://api.github.com/users/darwinharianto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/darwinharianto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/darwinharianto/subscriptions",
"organizations_url": "https://api.github.com/users/darwinharianto/orgs",
"repos_url": "https://api.github.com/users/darwinharianto/repos",
"events_url": "https://api.github.com/users/darwinharianto/events{/privacy}",
"received_events_url": "https://api.github.com/users/darwinharianto/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Yes we had a hard time making the Space output the same bounding boxes as in Colab (eventually it worked on the cats image). It had to do with the Pillow version.\r\n\r\nSo I'm guessing there might be a difference in Pillow versions here as well\r\n\r\nCc @alaradirik ",
"Do you mean Pillow changes the input value?\r\nI tried another image\r\n\r\nspace model cant detect cat inside this image, but colab version can detect it\r\n\r\n\r\n\r\n",
"@darwinharianto thanks for bringing the issue up, I'm looking into it!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Kindly bumping",
"Kindly reminder",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"cc @alaradirik and @amyeroberts ",
"I got the same issues. \r\nThis is original repo results.\r\n\r\n\r\n\r\nAnd this is [huggingface demo](https://huggingface.co/spaces/Jiayi-Pan/OWL-ViT).\r\n```\r\ntext_queries = text_queries.split(\",\")\r\ntarget_sizes = torch.Tensor([img.shape[:2]])\r\ninputs = processor(text=text_queries, images=img, return_tensors=\"pt\").to(device) \r\nwith torch.no_grad():\r\n outputs = model(**inputs)\r\n\r\noutputs.logits = outputs.logits.cpu()\r\noutputs.pred_boxes = outputs.pred_boxes.cpu()\r\nresults = processor.post_process(outputs=outputs, target_sizes=target_sizes)\r\n```\r\n\r\n<img width=\"1036\" alt=\"image\" src=\"https://user-images.githubusercontent.com/27891090/233775093-ce8aee88-b0a0-4d81-b917-ab3136c5388d.png\">\r\n\r\nThe `rocket` bounding box score is different. (0.15 vs more than 0.21)\r\n\r\nWith lvis-api, the performance is not reproduced. (mAP = 0.095)",
"It seems problem still exist. I mentioned about problem here. \r\n\r\nhttps://github.com/huggingface/transformers/pull/23157#issuecomment-1540056705\r\n\r\nMaybe the best way is to cover with model predictions end-to-end tests on batch of images. This approach help us to be sure about changes\r\n",
"@MaslikovEgor I agree with you. I have end-to-end test with lvis-api (both huggingface owlvit and google/scenic owl-vit). But owl vit in huggingface is not reproduced. (mAP = 0.095)\r\n\r\n- [baseline](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit): mAp 0.193 \r\n",
"I want to fix this problem, but it would be efficient if I knew where to start. Can you give me a suggestion? @alaradirik ",
"Hi @MaslikovEgor,\r\n\r\nThe demo didn't work before this fix as well (see https://github.com/huggingface/transformers/pull/20136). Try running coco evaluation with image conditioning before/after this fix, mAP@0.5 increases from 6 to 37. This is still below the expected 44, but closer to the reported/expected performance. I am still trying to figure out why.\r\nBest,\r\nOrr",
"@RRoundTable, the issues you are reporting seem to do with the text-conditioned evaluation. This means that the issues probably stem from the forward pass/post-processing. \r\n\r\nIn your LVIS eval, did you make sure to implement a new post-processor that incorporates all the changes needed for eval? If helpful, I can add my function to 'processor' or something, please notice there are a few changes compared with normal inference.",
"@orrzohar, Yes. I tested with text-conditioned evaluation.\r\n\r\nIn my LVIS eval, I just used huggingface's postprocessor and preprocessor. It would be helpful if you contribute some functions.\r\n\r\n```\r\ntransformers[torch] == 4.28.1\r\n```\r\n\r\n```\r\n# example script\r\nimport requests\r\nfrom PIL import Image\r\nimport torch\r\nimport glob\r\nimport os\r\nimport argparse\r\nimport json\r\nfrom tqdm import tqdm\r\n\r\nfrom transformers import OwlViTProcessor, OwlViTForObjectDetection\r\n\r\nparser = argparse.ArgumentParser()\r\nparser.add_argument(\"--dataset-path\", type=str, required=True)\r\nparser.add_argument(\"--text-query-path\", type=str required=True)\r\nparser.add_argument(\"--save-path\", default=\"owl-vit-result.json\", type=str)\r\nparser.add_argument(\"--batch-size\", default=64, type=int)\r\nargs = parser.parse_args()\r\n\r\nmodel = OwlViTForObjectDetection.from_pretrained(\"google/owlvit-base-patch32\")\r\nprocessor = OwlViTProcessor.from_pretrained(\"google/owlvit-base-patch32\")\r\ndevice = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')\r\nmodel.to(device)\r\n\r\n\r\nwith open(args.text_query_path, \"r\") as f:\r\n text_query = f.read()\r\n\r\nimages = glob.glob(os.path.join(args.dataset_path, \"*\"))\r\nimage_ids = [img_path.split(\"/\")[-1].split(\".\")[0] for img_path in images]\r\n\r\ninstances = []\r\nN = len(images)\r\n\r\nwith torch.no_grad():\r\n for i in tqdm(range(N // args.batch_size + 1)):\r\n image_ids = []\r\n batch_images = []\r\n target_sizes = []\r\n for img_path in images[i * args.batch_size: (i+1) * args.batch_size]:\r\n image_ids.append(int(img_path.split(\"/\")[-1].split(\".\")[0]))\r\n image = Image.open(img_path).convert(\"RGB\")\r\n batch_images.append(image)\r\n target_sizes.append((image.size[1], image.size[0]))\r\n target_sizes = torch.Tensor(target_sizes)\r\n target_sizes = target_sizes.to(device)\r\n texts = [text_query.split(\",\")] * len(batch_images)\r\n inputs = processor(text=texts, 
images=batch_images, return_tensors=\"pt\")\r\n inputs = inputs.to(device)\r\n outputs = model(**inputs)\r\n # Target image sizes (height, width) to rescale box predictions [batch_size, 2]\r\n\r\n # Convert outputs (bounding boxes and class logits) to COCO API\r\n results = processor.post_process(outputs=outputs, target_sizes=target_sizes)\r\n for image_id, res in zip(image_ids, results):\r\n for bbox, score, label in zip(res[\"boxes\"], res[\"scores\"], res[\"labels\"]):\r\n # tensor to numpy\r\n bbox = bbox.cpu().detach().numpy()\r\n score = score.cpu().detach().numpy()\r\n label = label.cpu().detach().numpy()\r\n # bbox format: xyxy -> xywh\r\n x1, y1, x2, y2 = bbox\r\n bbox = [int(x1), int(y1), int(x2-x1), int(y2-y1)]\r\n instance = {}\r\n instance[\"image_id\"] = image_id\r\n instance[\"bbox\"] = bbox # TODO\r\n instance[\"score\"] = float(score)\r\n instance[\"category_id\"] = int(label) + 1 # TODO\r\n instances.append(instance)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @RRoundTable , \r\n\r\nI added a PR with the appropriate evaluation protocol\r\n\r\nhttps://github.com/huggingface/transformers/pull/23982\r\n\r\nBest,\r\nOrr",
"Hi! @alaradirik,\r\nI'm using transformers==4.30.2 but still encountered the same issue. Any thought on this?\r\n\r\n**Query image:**\r\n\r\n\r\n**Result from colab:**\r\n\r\n\r\n**Result from huggingface:**\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"cc @rafaelpadilla ",
"Hi folks, I've investigated the difference, will be solved in PR below. TLDR: image preprocessing is done differently in the original Colab (involves padding the image to a square), whereas the HF implementation used center cropping. The model itself is fine, logits are exactly the same as original implementation on the same inputs.",
"Hi folks, since OWLv2 was now added in #26668, you will see that results match one-on-one with the original [Google Colab notebook](https://colab.research.google.com/github/google-research/scenic/blob/main/scenic/projects/owl_vit/notebooks/OWL_ViT_minimal_example.ipynb) provided by the authors. \r\n\r\nIf you also want to get one-on-one matching results for OWLv1, then you will need to use `Owlv2Processor` (which internally uses `Owlv2ImageProcessor`) instead of `OwlViTProcessor` as it uses the exact same image preprocessing settings as the Colab notebook. We cannot change this for v1 due to backwards compatibility.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,700
| 1,700
|
NONE
| null |
### System Info
Using huggingface space and google colab
### Who can help?
@adirik
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
cat picture from http://images.cocodataset.org/val2017/000000039769.jpg
remote control image from https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSRUGcH7a3DO5Iz1sknxU5oauEq9T_q4hyU3nuTFHiO0NMSg37x
### Expected behavior
Being excited by the results of OWL-ViT, I tried feeding it some random images to see the results.
Having no experience with JAX, my first stop was a Hugging Face Space.
Given a query of "remote control" and a cat picture, I wanted to get detections of the remote controls.
https://huggingface.co/spaces/adirik/image-guided-owlvit

The results are not what I expected (no boxes on the remotes).
I then checked whether the Colab version behaves the same way.
https://colab.research.google.com/github/google-research/scenic/blob/main/scenic/projects/owl_vit/notebooks/OWL_ViT_inference_playground.ipynb#scrollTo=AQGAM16fReow

It correctly draws boxes on the remotes.
I am not sure what is happening. Which part should I look at to determine what causes this difference?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21206/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21205
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21205/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21205/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21205/events
|
https://github.com/huggingface/transformers/pull/21205
| 1,550,292,538
|
PR_kwDOCUB6oc5ILzft
| 21,205
|
WIP: Added basic eos token based pooling
|
{
"login": "isamu-isozaki",
"id": 23430101,
"node_id": "MDQ6VXNlcjIzNDMwMTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/23430101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isamu-isozaki",
"html_url": "https://github.com/isamu-isozaki",
"followers_url": "https://api.github.com/users/isamu-isozaki/followers",
"following_url": "https://api.github.com/users/isamu-isozaki/following{/other_user}",
"gists_url": "https://api.github.com/users/isamu-isozaki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isamu-isozaki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isamu-isozaki/subscriptions",
"organizations_url": "https://api.github.com/users/isamu-isozaki/orgs",
"repos_url": "https://api.github.com/users/isamu-isozaki/repos",
"events_url": "https://api.github.com/users/isamu-isozaki/events{/privacy}",
"received_events_url": "https://api.github.com/users/isamu-isozaki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@ArthurZucker Hi! Just moved to this pr. Just some git issues so switched but this is based on the pr [here](https://github.com/huggingface/transformers/pull/21096)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21205). All of your documentation changes will be reflected on that endpoint.",
"Attempt fixing the bugs now",
"From message on old pr: @ArthurZucker Thanks for the comment! Very interesting. The reason I'm resistant to just using self.config.vocab_size-1 is that when adding new tokens for the textual inversion training, usually we increase the resize_token_embedding method. So then when loading the trained embeddings, self.config.vocab_size-1 is not the eos token id anymore.\r\n\r\nDo you think I should change the logic for resize_token_embeddings and tokenizer instead so that the eos_token_id is always the max? The disadvantage of this is that it'll be way more code.",
"Hey, not really, I would say the least changes the better! ",
"Also for the eos_token ids, you can just set it as an argument in the clip config, and maybe raise some kind of warning if it is not the last? The default should be `config.eos_token_id`",
"@ArthurZucker Thanks for the comment! Good point. Will do that asap",
"ok seems like the bugs are coming from indexing I do when calculating the new clip pooling. I'll try fixing that within this week",
"@isamu-isozaki Thank you for working on this PR. \r\n\r\n@ArthurZucker instead of assuming the eos_token is the last (by id) or relying on the config, isn't it better to look the id up from vocab? \r\n\r\nSomething like:\r\n```\r\n self.bos_token = self.vocab[\"<|startoftext|>\"]\r\n self.eos_token = self.vocab[\"<|endoftext|>\"]\r\n self.pad_token = self.vocab[\"<|endoftext|>\"]\r\n ```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,680
| 1,680
|
NONE
| null |
# What does this PR do?
This PR is still a WIP. It is based on [this issue](https://github.com/huggingface/transformers/issues/21029). The main problem is that when new tokens are added to the tokenizer and text model and then learned, such as with [textual inversion](https://textual-inversion.github.io/), the CLIP text model pools at the wrong location: pooling is done at the newly added token's position rather than at the eos token's position.
Fixes #21029
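A minimal sketch of the difference (a toy batch with plain Python lists, not the actual `CLIPTextModel` code): CLIP's pooling assumes the eos token has the largest id and takes an argmax over token ids, which breaks once `resize_token_embeddings` introduces ids larger than the eos id.

```python
def pool_at_eos(input_ids, hidden_states, eos_token_id):
    """Pool each sequence's hidden state at its first eos position."""
    return [states[ids.index(eos_token_id)]
            for ids, states in zip(input_ids, hidden_states)]

# Toy batch: eos id is 2, and a freshly added token got id 5 (> eos id).
input_ids = [[0, 4, 5, 2, 2]]                          # bos, prompt, <new>, eos, pad
hidden_states = [[[0.0], [0.1], [0.2], [0.3], [0.4]]]  # one 1-dim state per token

pooled = pool_at_eos(input_ids, hidden_states, eos_token_id=2)

# argmax-over-ids pooling (the assumption that eos has the largest id)
# picks the new token at index 2 instead of the eos token at index 3
naive_pos = max(range(len(input_ids[0])), key=input_ids[0].__getitem__)
```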
## Before submitting
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Models:
- text models: @ArthurZucker and @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21205/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21205/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21205",
"html_url": "https://github.com/huggingface/transformers/pull/21205",
"diff_url": "https://github.com/huggingface/transformers/pull/21205.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21205.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21204
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21204/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21204/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21204/events
|
https://github.com/huggingface/transformers/issues/21204
| 1,550,193,500
|
I_kwDOCUB6oc5cZhNc
| 21,204
|
Typo in XCLIP model
|
{
"login": "Zhilin123",
"id": 29811458,
"node_id": "MDQ6VXNlcjI5ODExNDU4",
"avatar_url": "https://avatars.githubusercontent.com/u/29811458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zhilin123",
"html_url": "https://github.com/Zhilin123",
"followers_url": "https://api.github.com/users/Zhilin123/followers",
"following_url": "https://api.github.com/users/Zhilin123/following{/other_user}",
"gists_url": "https://api.github.com/users/Zhilin123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zhilin123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zhilin123/subscriptions",
"organizations_url": "https://api.github.com/users/Zhilin123/orgs",
"repos_url": "https://api.github.com/users/Zhilin123/repos",
"events_url": "https://api.github.com/users/Zhilin123/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zhilin123/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@Zhilin123 Thank you for reporting. See #21207. However, note that \r\n\r\n```\r\nALL_PRETRAINED_CONFIG_ARCHIVE_MAP is deprecated and will be removed in v5 of Transformers.\r\n```"
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
### System Info
transformers version 4.25.1
### Who can help?
@NielsRogge @ydshieh
There's a typo/mismatch for the xclip model pretrained config archive map between https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/configuration_auto.py#L333
and https://github.com/huggingface/transformers/blob/main/src/transformers/models/x_clip/configuration_x_clip.py#L27
Notice that in the first example there is an underscore between X and CLIP, whereas in the second there isn't.
This leads to an error when initializing the auto model:
```
huggingface_models = list(ALL_PRETRAINED_CONFIG_ARCHIVE_MAP.keys())
  File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/configuration_auto.py", line 612, in keys
    self._initialize()
  File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/configuration_auto.py", line 602, in _initialize
    mapping = getattr(module, map_name)
  File "/usr/local/lib/python3.8/dist-packages/transformers/utils/import_utils.py", line 1086, in __getattr__
    raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module transformers.models.x_clip has no attribute X_CLIP_PRETRAINED_CONFIG_ARCHIVE_MAP
```
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. from transformers import ALL_PRETRAINED_CONFIG_ARCHIVE_MAP
2. huggingface_models = list(ALL_PRETRAINED_CONFIG_ARCHIVE_MAP.keys())
### Expected behavior
Expect no errors to be raised
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21204/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21203
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21203/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21203/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21203/events
|
https://github.com/huggingface/transformers/issues/21203
| 1,550,177,424
|
I_kwDOCUB6oc5cZdSQ
| 21,203
|
rename configuration_utils.PretrainedConfig.max_length to max_generation_length
|
{
"login": "aaronrmm",
"id": 1742879,
"node_id": "MDQ6VXNlcjE3NDI4Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1742879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronrmm",
"html_url": "https://github.com/aaronrmm",
"followers_url": "https://api.github.com/users/aaronrmm/followers",
"following_url": "https://api.github.com/users/aaronrmm/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronrmm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaronrmm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronrmm/subscriptions",
"organizations_url": "https://api.github.com/users/aaronrmm/orgs",
"repos_url": "https://api.github.com/users/aaronrmm/repos",
"events_url": "https://api.github.com/users/aaronrmm/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaronrmm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for raising an issue. Renaming an argument like this which is so widely used is too breaking a change for us to consider however. ",
"Makes sense. Perhaps there is something that can be done in the mlflow logger instead. I'll raise a different issue if I have any ideas."
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
### Feature request
Max length (and min length) is an overloaded term. This renaming would help disambiguate this parameter from tokenizer max lengths when parameters are logged.
https://github.com/huggingface/transformers/blob/862888a35834527fed61beaf42373423ffdbd216/src/transformers/configuration_utils.py#L119
### Motivation
When looking at MLflow logs for my text classification model, I saw "max length" as one of the parameters. I had to debug to figure out that it was not relevant to classification. This seems to me like the simplest solution to make it clear that this parameter is only relevant to generation.
### Your contribution
I can of course do a PR with the renaming change. However, I am not sure of the downstream implications.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21203/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21202
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21202/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21202/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21202/events
|
https://github.com/huggingface/transformers/issues/21202
| 1,550,121,019
|
I_kwDOCUB6oc5cZPg7
| 21,202
|
batched feature extraction pipeline for GPT-style models
|
{
"login": "davidegraff",
"id": 60193893,
"node_id": "MDQ6VXNlcjYwMTkzODkz",
"avatar_url": "https://avatars.githubusercontent.com/u/60193893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidegraff",
"html_url": "https://github.com/davidegraff",
"followers_url": "https://api.github.com/users/davidegraff/followers",
"following_url": "https://api.github.com/users/davidegraff/following{/other_user}",
"gists_url": "https://api.github.com/users/davidegraff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidegraff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidegraff/subscriptions",
"organizations_url": "https://api.github.com/users/davidegraff/orgs",
"repos_url": "https://api.github.com/users/davidegraff/repos",
"events_url": "https://api.github.com/users/davidegraff/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidegraff/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Narsil ",
"This one is tricky.\r\nIn general GPT-like models should pad on the left, not on the right, meaning your snippet **should** work.\r\nHowever, there's no telling if everything is properly configured simply. (The pipeline does whatever the config is set to do, it doesn't try to do reasoning on it).\r\n\r\nIn theory, everything should be quite transparent if the `padding_side` is properly set on the tokenizer.\r\nWould that solve your issue ?\r\nIf you're unsure, maybe setting `classification_token_ids = 0 if fe.tokenizer.padding_side=\"right\" else -1` could do the trick.\r\n\r\nMaybe if you have a specific model where it doesn't work properly I could take a look ?\r\n\r\nNote: I wasn't aware GPT-like models were good for document embedding.",
"Hi, @davidegraff this might be helpful, \r\n\r\n```\r\nimport torch\r\nfrom transformers import pipeline\r\n\r\nipts = [\"Hi I am human.\", \"The sky\", \"hello there\"]\r\nfe = pipeline(task=\"feature-extraction\",\r\n model=\"gpt2\",\r\n framework=\"pt\",\r\n return_tensors=True)\r\n\r\n# Since gpt2 doesn't have a pad_token\r\nif not fe.tokenizer.special_tokens_map.get(\"pad_token\"):\r\n pad_token = {\"pad_token\":\"<|endoftext|>\"}\r\n fe.tokenizer.add_special_tokens(pad_token)\r\n fe.model.resize_token_embeddings(len(fe.tokenizer))\r\n\r\n# Make sure the padding_side is 'left' (if you open gpt2tokenizer you will find that by default\r\n# the padding_side is 'right')\r\nfe.tokenizer.padding_size = \"left\" #For BERT like models use \"right\"\r\n\r\n# get the outputs\r\nopts = fe(ipts, batch_size=3)\r\n\r\nclassification_token_idx = -1 #For BERT like models use 0(if you want to use the embeddings of [CLS] token)\r\nH = torch.stack([x[0, classification_token_idx, :] for x in opts])\r\n\r\n```\r\n\r\nTo see if batch_size if working or not I ran,\r\n\r\n```\r\n%%timeit -n 100\r\nopts = fe(ipts)\r\n```\r\n>> 143 ms ± 10.8 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)\r\n\r\n```\r\n%%timeit -n 100\r\nopts = fe(ipts, batch_size=3) \r\n```\r\n>> 86.6 ms ± 10.8 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)\r\n\r\nI hope it helps.",
"I see! Thanks for the example @susnato!\r\n\r\nre @Narsil:\r\nMy use-case is a little specialized- I'm looking specifically at LLMs trained on chemical corpora. Frey et al. [[1]] show that you can use the \"embeddings\" from a GPT model trained on molecules in an unsupervised fashion as molecular representations (Fig. 7). They made their model available on the HF hub [[2]], but it seems like there might be an issue with padding based on your explanation:\r\n```python\r\n>>> featurizer = pipeline(\r\n \"feature-extraction\",\r\n model=\"ncfrey/ChemGPT-1.2B\",\r\n framework=\"pt\",\r\n return_tensors=True\r\n)\r\n>>> featurizer.tokenizer.padding_side\r\n\"right\"\r\n```\r\nThe original data preparation code also makes no mention of left-wise padding [[3]]. Loading the individual tokenizer (via `AutoTokenizer`) results in the same thing. Does this mean it was trained incorrectly? Or is this just something I have to be aware of when loading `Tokenizer`s (by adding `tokenizer.padding_side = \"left\"` for GPT-style models)?\r\n\r\n[1]: https://doi.org/10.26434/chemrxiv-2022-3s512\r\n[2]: https://huggingface.co/ncfrey/ChemGPT-1.2B\r\n[3]: https://github.com/ncfrey/litmatter/blob/main/lit_data/lm_data.py#L31",
"> Does this mean it was trained incorrectly?\r\n\r\nI'm not sure how we train \"correctly\" so it'd be hard to train \"incorrectly\". Joke aside, a model is trained a certain way, it's up to the inference to understand how it was done and was it acceptable or not within the framework of how it was trained.\r\n\r\nPadding side in training shouldn't matter in inference, especially for causal LM since, they're supposed to ignore the padding anyway. \r\nNow that's theory, in practice I would definitely run some tests to make sure what's supposed to be actually is correct.\r\n\r\nBut, you could always just override the `padding_side` and see how are the results compared to the non batched, non overridden ones on a subset of examples you know the answer for. That would be my first step at least.\r\n\r\nTo override.\r\n\r\n```python\r\npipe = pipeline(\"feature-extraction\", model=\"cfrey/ChemGPT-1.2B\",\r\n framework=\"pt\",\r\n return_tensors=True)\r\npipe.tokenizer.padding_side = \"left\"\r\n\r\nfor out in pipe(...):\r\n print(out)\r\n```\r\n\r\nFor instance.",
"Thanks for taking the time to explain all this, I really appreciate it!\r\n\r\nThe original paper is sparse on details, so I'm not really sure what the authors are doing when they (1) generate encodings and (2) how (or if) they pad during inference. In the absence of these details, I guess I'm trying to take a principled approach to generate these encodings:\r\n1) a sanity check to make sure no funny business is going on\r\n```python\r\nfeaturizer = pipeline(\r\n \"feature-extraction\", model=\"ncfrey/ChemGPT-1.2B\", framework=\"pt\", return_tensors=True, \r\n)\r\nfeaturizer.tokenizer.add_special_tokens({'pad_token': '[PAD]'})\r\nsfs = [\r\n '[C][C][N][=C][Branch1_1][O][N][C][C][C][C][C][C][C][Ring1][Branch1_3][S][C][Expl=Ring1][=N][C][Branch1_2][C][=O][O-expl]',\r\n '[C][N][Branch1_1][Branch2_2][C][C][C][C][C][C][Ring1][Branch1_2][S][Branch1_2][C][=O][Branch1_2][C][=O][C][=C][C][=C][Branch2_1][Ring1][Branch2_2][N][C][Branch1_2][C][=O][C][C][N][C][Branch1_2][C][=O][C@Hexpl][C][C][=C][C][C@@Hexpl][Ring1][Branch1_2][C][Ring1][Branch2_3][=O][C][=C][Ring2][Ring1][Branch1_2]',\r\n '[C][C@Hexpl][C][C][C@Hexpl][Branch2_1][Ring1][Branch1_3][NH+expl][C][C][C][C@Hexpl][Branch1_1][=N][C@Hexpl][Branch1_1][C][O][C][=N][C][=C][N][Ring1][Branch1_1][C][C][Ring1][=C][C][Ring2][Ring1][Ring1]',\r\n '[N][/C][Branch2_1][Ring1][Ring2][C][N][C][Branch1_2][C][=O][C@@Hexpl][C][C][=C][C][=C][C][=C][Ring1][Branch1_2][S][Ring1][Branch2_2][=N][\\\\O]'\r\n]\r\nfeaturizer.tokenizer.padding_side = 'right'\r\nX_unpadded_r = torch.stack([H[0, -1, :] for H in featurizer(sfs)])\r\nfeaturizer.tokenizer.padding_side = 'left'\r\nX_unpadded_l = torch.stack([H[0, -1, :] for H in featurizer(sfs)])\r\ntorch.allclose(X_unpadded_r, X_unpadded_l)\r\n# True\r\n```\r\n2) now seeing the effects of batching\r\n```python\r\nfeaturizer.tokenizer.padding_side = 'right'\r\nX_padded_r = torch.stack([H[0, -1, :] for H in featurizer(sfs, batch_size=4)])\r\nfeaturizer.tokenizer.padding_side = 'left'\r\nX_padded_l = 
torch.stack([H[0, -1, :] for H in featurizer(sfs, batch_size=4)])\r\ntorch.allclose(X_padded_r, X_padded_l)\r\n# False\r\ntorch.allclose(X_unpadded_r, X_padded_l), torch.allclose(X_unpadded_r, X_padded_r)\r\n# (False, False)\r\n```\r\nso while it's expected that left vs. right padding produces different results if we take the same embedding (i.e., the last token), it's is surprising that _neither_ of these is the same as the unpadded results. The simple answer in this situation is likely just \"then don't batch,\" but there are significant performance gains to be had when utilizing batching. Do you have any advice here? Thanks again!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,677
| 1,677
|
NONE
| null |
### Feature request
it would be nice to support feature extraction of batched input for GPT-style models using `Pipeline`s
### Motivation
I'm currently trying to generate encodings of a large number of sentences using LLMs. I.e.,:
```python
classification_token_idx: int
fe = pipeline("feature-extraction", model="some-LLM", framework="pt", return_tensors=True)
inputs = [...]
output = fe(inputs)
H = torch.stack([x[0, classification_token_idx, :] for x in output])
```
where depending on whether I'm using a BERT-style or GPT-style model, `classification_token_idx` will be either 0 or -1, respectively. My use-case can greatly benefit from batching, but the adapted snippet no longer works for GPT-style models:
```python
...
output = fe(inputs, batch_size=BATCH_SIZE)
H = torch.stack([x[0, classification_token_idx, :] for x in output])
```
In a batch of sequences, the 0th index of a sequence will always be the `[CLS]` token regardless of padding. However, the last index of a sequence in a padded batch of sequences will most likely be a `[PAD]` token rather than the true last token of the sequence. Using the `Pipeline` interface with a GPT-style model makes it non-trivial to extract features *and* take advantage of input batching, leaving users with three options:
1. do not batch. Possibly much slower, but reliable
2. do not use a `Pipeline`. More robust, but fairly cumbersome to implement and will likely be repeated across most users
3. implement a custom `Pipeline`. The most "elegant" solution (IMO), but one that should arguably be in the huggingface library (and is the point of this feature request.)
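For option 2, the core trick is to use the attention mask to locate the last non-pad position in each right-padded sequence. A minimal sketch with toy tensors (shapes and values are illustrative, not from any real model):
```python
import torch

# toy "hidden states": batch of 2 sequences, 4 positions, hidden size 3
hidden = torch.arange(24, dtype=torch.float32).reshape(2, 4, 3)
# attention mask: sequence 0 has 4 real tokens, sequence 1 has 2 (right-padded)
attention_mask = torch.tensor([[1, 1, 1, 1], [1, 1, 0, 0]])

# index of the last real (non-pad) token in each sequence
last_idx = attention_mask.sum(dim=1) - 1  # tensor([3, 1])
H = hidden[torch.arange(hidden.size(0)), last_idx]
print(H.tolist())  # [[9.0, 10.0, 11.0], [15.0, 16.0, 17.0]]
```
The same indexing works on `last_hidden_state` from a real forward pass, which is why option 2 is robust regardless of `padding_side`.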
### Your contribution
I doubt I'm the person for the job.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21202/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21201
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21201/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21201/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21201/events
|
https://github.com/huggingface/transformers/pull/21201
| 1,550,066,102
|
PR_kwDOCUB6oc5ILEOI
| 21,201
|
Fix code example in training tutorial
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
MEMBER
| null |
The Keras [section](https://huggingface.co/docs/transformers/main/en/training#train-a-tensorflow-model-with-keras) of the training tutorial throws an error because it tokenizes `dataset["text"]` instead of `dataset["sentence"]`. There is no `text` column in this dataset.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21201/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21201",
"html_url": "https://github.com/huggingface/transformers/pull/21201",
"diff_url": "https://github.com/huggingface/transformers/pull/21201.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21201.patch",
"merged_at": 1674229096000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21200
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21200/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21200/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21200/events
|
https://github.com/huggingface/transformers/pull/21200
| 1,549,803,144
|
PR_kwDOCUB6oc5IKLIN
| 21,200
|
Fix task summary doctest
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
MEMBER
| null |
The doctests are failing for the updated task summary page because outputs weren't included in the code examples. This PR adds real inputs that can be pipelined instead of the generic `("path/to/data/file")`. I skipped the `text-generation` pipeline, but let me know if you'd prefer setting a seed for it so we can still reliably generate an output.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21200/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21200",
"html_url": "https://github.com/huggingface/transformers/pull/21200",
"diff_url": "https://github.com/huggingface/transformers/pull/21200.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21200.patch",
"merged_at": 1674237488000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21199
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21199/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21199/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21199/events
|
https://github.com/huggingface/transformers/pull/21199
| 1,549,794,414
|
PR_kwDOCUB6oc5IKJKB
| 21,199
|
Remove all hf-internal-testing checkpoints that can be removed
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
This PR continues the work on docstrings and removes all checkpoints from the hf-internal-testing org where they can be removed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21199/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21199",
"html_url": "https://github.com/huggingface/transformers/pull/21199",
"diff_url": "https://github.com/huggingface/transformers/pull/21199.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21199.patch",
"merged_at": 1674238799000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21198
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21198/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21198/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21198/events
|
https://github.com/huggingface/transformers/pull/21198
| 1,549,600,846
|
PR_kwDOCUB6oc5IJe-4
| 21,198
|
[Whisper] Fix pipeline after timestamp merges
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Fix the ASR pipeline for Whisper without timestamps by ensuring the `WhisperTimestampProcessor` is not added to the list of logits processors when it is not requested.
Fixes #21179
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21198/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21198",
"html_url": "https://github.com/huggingface/transformers/pull/21198",
"diff_url": "https://github.com/huggingface/transformers/pull/21198.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21198.patch",
"merged_at": 1674207100000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21197
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21197/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21197/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21197/events
|
https://github.com/huggingface/transformers/pull/21197
| 1,549,474,312
|
PR_kwDOCUB6oc5IJDtZ
| 21,197
|
Flax dtype-dependent numerical masking
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger this solution was discussed on Slack with the Flax team, hence no added Flax reviewers :)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
MEMBER
| null |
# What does this PR do?
Fixes #21176
For some models, our Flax numerical masking was incompatible with the desired variable type. This PR fixes it by selecting a numerical mask that is the minimum for the corresponding variable type.
This PR is akin to #17306 for PT. Thank you @LysandreJik and @ydshieh for pointing it out 🙏
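The dtype-dependent mask value amounts to using the minimum representable value of the floating type instead of a hard-coded constant. NumPy's `finfo` is used here purely for illustration (JAX exposes the same idea via `jnp.finfo`); this is a sketch of the principle, not the PR's exact code:
```python
import numpy as np

# an additive attention-mask fill should be the minimum of the dtype,
# not a hard-coded constant like -1e9 (which overflows float16)
for dtype in (np.float16, np.float32):
    print(dtype.__name__, np.finfo(dtype).min)
# float16 -65504.0
# float32 -3.4028235e+38
```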
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21197/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21197/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21197",
"html_url": "https://github.com/huggingface/transformers/pull/21197",
"diff_url": "https://github.com/huggingface/transformers/pull/21197.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21197.patch",
"merged_at": 1674146623000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21196
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21196/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21196/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21196/events
|
https://github.com/huggingface/transformers/pull/21196
| 1,549,407,611
|
PR_kwDOCUB6oc5II1Q6
| 21,196
|
Enabling live `automatic-speech-recognition` asr for Whisper.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> LGTM but we could add your script somewhere no? Seems like it was asked\r\n\r\nWhere ? \r\nThe `examples` is more focused on fine-tuning/training. It could be in the docstring of ASR pipeline ( which already contain tons of information).\r\n\r\nUltimately both examples are nice-to-have things, but definitely not something we want to support as the rest of core transformers iirc the discussions when this was added. Too many specific things, like capturing the correct mic is a hard job, then the amount of features that might be wanted could be very intense, and it's not really the goal of `tranformers` to maintain this. This is more a showcase and because using ffmpeg enables a relatively short code base for support.\r\nMaybe something that could be more prominently be features in `speechbox`. @patrickvonplaten for an opinion on this ?"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Enables live ASR for Whisper.
Inference is slower for ASR than for CTC models though, so these demos come with
a caveat:
**It will not be live on hardware that is too small**, simply because inference will
fall behind real time.
The simplest fix is to increase the `stream_chunk_s` parameter. That reduces
the "liveliness" of the inference but puts less strain on the hardware.
Another, more complex fix (outside of these short scripts) would be to keep track
of real time and **skip** some inferences in the pipeline when we are too late.
Live script:
```python
import sys
import numpy as np
from transformers import pipeline
from transformers.pipelines.audio_utils import ffmpeg_microphone_live
from curses import wrapper
import curses
def main():
pipe = pipeline("automatic-speech-recognition", model="openai/whisper-base", device=0)
sampling_rate = pipe.feature_extractor.sampling_rate
chunk_length_s = 5
stream_chunk_s = 0.1
mic = ffmpeg_microphone_live(
sampling_rate=sampling_rate,
chunk_length_s=chunk_length_s,
stream_chunk_s=stream_chunk_s, # , stride_length_s=(1, 0.1)
)
print("Start talking...")
stdscr = curses.initscr()
curses.noecho()
curses.cbreak()
text = ""
for item in pipe(mic):
displayed = text + item["text"]
if not item["partial"][0]:
text += item["text"]
stdscr.addstr(0, 0, displayed)
stdscr.clrtoeol()
stdscr.refresh()
if __name__ == "__main__":
    try:
        main()
    finally:
        # restore the terminal state on exit
        curses.endwin()
```
Simpler script:
```python
import datetime
import sys
from transformers import pipeline
from transformers.pipelines.audio_utils import ffmpeg_microphone_live
pipe = pipeline("automatic-speech-recognition", model="openai/whisper-base", device=0)
sampling_rate = pipe.feature_extractor.sampling_rate
start = datetime.datetime.now()
chunk_length_s = 5
stream_chunk_s = 0.1
mic = ffmpeg_microphone_live(
sampling_rate=sampling_rate,
chunk_length_s=chunk_length_s,
stream_chunk_s=stream_chunk_s,
)
print("Start talking...")
for item in pipe(mic):
sys.stdout.write("\033[K")
print(item["text"], end="\r")
if not item["partial"][0]:
print("")
```
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21196/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21196",
"html_url": "https://github.com/huggingface/transformers/pull/21196",
"diff_url": "https://github.com/huggingface/transformers/pull/21196.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21196.patch",
"merged_at": 1674206126000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21195
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21195/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21195/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21195/events
|
https://github.com/huggingface/transformers/pull/21195
| 1,549,276,772
|
PR_kwDOCUB6oc5IIYac
| 21,195
|
Add class properties with warnings
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Adds properties with deprecation warnings to image processors for backwards compatibility. This resolves issues users had when trying to reference a deprecated property, e.g. `image_processor.reduce_labels`.
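A minimal, hypothetical sketch of the backwards-compatibility pattern described above (the class and attribute names here are illustrative only, not the actual image processor code):

```python
import warnings


class ImageProcessor:
    """Toy image processor illustrating a deprecated-attribute alias."""

    def __init__(self, do_reduce_labels=False):
        # The canonical attribute uses the new name.
        self.do_reduce_labels = do_reduce_labels

    @property
    def reduce_labels(self):
        # The old name still works, but warns so users can migrate.
        warnings.warn(
            "`reduce_labels` is deprecated, use `do_reduce_labels` instead.",
            FutureWarning,
        )
        return self.do_reduce_labels
```

Reading `processor.reduce_labels` then returns the same value as `processor.do_reduce_labels` while emitting a `FutureWarning`.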
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21195/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21195",
"html_url": "https://github.com/huggingface/transformers/pull/21195",
"diff_url": "https://github.com/huggingface/transformers/pull/21195.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21195.patch",
"merged_at": 1674499528000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21194
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21194/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21194/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21194/events
|
https://github.com/huggingface/transformers/pull/21194
| 1,549,240,163
|
PR_kwDOCUB6oc5IIQUl
| 21,194
|
Rename GLPN image processor tests
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Renames the GLPN feature extractor test file, which was missed in #21140.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21194/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21194",
"html_url": "https://github.com/huggingface/transformers/pull/21194",
"diff_url": "https://github.com/huggingface/transformers/pull/21194.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21194.patch",
"merged_at": 1674139568000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21193
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21193/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21193/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21193/events
|
https://github.com/huggingface/transformers/pull/21193
| 1,549,239,255
|
PR_kwDOCUB6oc5IIQIP
| 21,193
|
[`CVT`] Fix module initialization issue
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Regardless of anything else, the initialization should always be done in `_init_weights`. This is the reason we have many flaky failures with tests that check slow/fast init give the same results for instance.",
"Thanks everyone for double checking! \r\n",
"Here is another explanation on why we should centralise weights initialization under `init_weights` (i.e. a more condensed explanation of https://github.com/huggingface/transformers/pull/20803#discussion_r1059138540 for anyone that wants to know more about the problem)\r\n\r\nRegardless if you are in a GPU or CPU, `from_pretrained`[ calls `model = cls(config, *model_args, **model_kwargs) ` at some point under the hood,](https://github.com/huggingface/transformers/blob/b9403e951661b53630afd95166874f75ede885c4/src/transformers/modeling_utils.py#L2360) (i.e. calls `model.__init__`) that will sequentially call `__init__` functions of each submodule of the model. This is called on CPU, sometimes on `meta` if `device_map` is enabled.\r\n\r\nBefore this PR, this caused calling `trunc_normal` for `CVT` from `torch.nn` that is not supported under `fp16`.\r\n\r\nSince `init_weights` is not called by `from_pretrained` - ([`_fast_init` is always set to `True`](https://github.com/huggingface/transformers/blob/b9403e951661b53630afd95166874f75ede885c4/src/transformers/modeling_utils.py#L2002) so `transformers` models never calls `init_weights` if `from_pretrained` is called, except if a user forces to do so - but there is no benefit doing it), we should centralise all weights initialization inside this function, therefore avoid calling this function when it is not needed."
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the issue described in the PR https://github.com/huggingface/transformers/pull/20803 and this comment: https://github.com/huggingface/transformers/pull/20803#discussion_r1059138540 for `CVT`
Before this PR, if a user wanted to initialize a CVT model in half precision with the example script below, they would encounter an error that is hard to interpret:
```python
from transformers import AutoFeatureExtractor, CvtForImageClassification
from PIL import Image
import requests
import torch
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('microsoft/cvt-13')
model = CvtForImageClassification.from_pretrained('microsoft/cvt-13', torch_dtype=torch.float16).to(0)
inputs = feature_extractor(images=image, return_tensors="pt").to(0, torch.float16)
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Error message:
```
RuntimeError: "erfinv_vml_cpu" not implemented for 'Half'
```
The reason for the error is described in https://github.com/huggingface/transformers/pull/20803#discussion_r1059138540
Therefore this PR circumvents the issue by forcing the `cls_token` module to be initialized in the correct place.
All slow tests pass
cc @sgugger @ydshieh
If this PR gets merged, there should be no more modules in `transformers` that are initialized with `trunc_normal_` outside the `init_weights` method
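A torch-free toy sketch of the design rule this PR enforces (illustrative names only, not the actual `Cvt` code): `__init__` only allocates storage, while value initialization is deferred to a single `_init_weights` hook, so constructing the model under fp16 or on `meta` never runs a numeric op the dtype does not support.

```python
import random


class ClsToken:
    def __init__(self, dim):
        # __init__ only allocates storage; no numeric init ops run here,
        # so constructing the module is safe on any device/dtype.
        self.weight = [0.0] * dim


class Model:
    def __init__(self, dim=4):
        self.cls_token = ClsToken(dim)

    def _init_weights(self, module):
        # All value initialization is centralised here, e.g. the
        # trunc_normal_-style fill that used to live in ClsToken.__init__.
        if isinstance(module, ClsToken):
            module.weight = [random.gauss(0.0, 0.02) for _ in module.weight]

    def init_weights(self):
        self._init_weights(self.cls_token)
```

Constructing `Model()` leaves `cls_token.weight` untouched; values are only filled in when `init_weights` is explicitly called.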
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21193/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21193",
"html_url": "https://github.com/huggingface/transformers/pull/21193",
"diff_url": "https://github.com/huggingface/transformers/pull/21193.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21193.patch",
"merged_at": 1674146199000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21192
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21192/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21192/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21192/events
|
https://github.com/huggingface/transformers/pull/21192
| 1,549,135,028
|
PR_kwDOCUB6oc5IH5EV
| 21,192
|
Fix device issue in `UperNetModelIntegrationTest`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Fix device issue in `UperNetModelIntegrationTest`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21192/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21192",
"html_url": "https://github.com/huggingface/transformers/pull/21192",
"diff_url": "https://github.com/huggingface/transformers/pull/21192.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21192.patch",
"merged_at": 1674134774000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21191
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21191/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21191/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21191/events
|
https://github.com/huggingface/transformers/pull/21191
| 1,549,048,472
|
PR_kwDOCUB6oc5IHmMT
| 21,191
|
Generate: documented function to compute the transition scores
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger merging as the failing test is a known flaky test (`tests/models/auto/test_modeling_auto.py::AutoModelTest::test_from_pretrained_dynamic_model_distant`)",
"Thanks! This function is super helpful for my use case.",
"OMG, I'm just hacking on making something like this. Thrilled to discover this function. Super helpful!"
] | 1,674
| 1,706
| 1,674
|
MEMBER
| null |
# What does this PR do?
Fixes #18616; Addresses comments in #5164, #20008 (and in a few other issues that I've lost track of).
## The issue
A few users would like to have a simple function to obtain the transition scores (i.e. the logits for each selected token at generate time). This is very useful for exploring the generated contents and simplifies the construction of powerful color-coded interfaces (e.g. [this one](https://joel.tools/codegen/)). It is also commonly requested to compare our models against OpenAI's.
We had a function for that in PT, `compute_beam_transition_scores`, but it was unknown to most users. This is because it was limited to beam-based approaches, was not in our documentation, and had no examples.
## The solution
This PR upgrades the function above to a first-class citizen 🥇 :
1. Makes it compatible with all generation strategies (e.g. Sample)
2. Adds a flag to renormalize the logits before fetching the right ones, which is a frequent downstream use
3. Adds it to the documentation
4. Populates the docstring with examples for which I've got questions a few times (how to print the token probabilities and how to recompute the score of the sequences in beam search)
In the process, I've decided to update the name of the function (`compute_beam_transition_scores` -> `compute_transition_scores`), to match better what it does. Although this technically breaks the API, the function was not part of our documented functions and, given the number of related issues, I'd say it was mostly unknown.
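A dependency-free sketch of what a "transition score" means for greedy decoding or sampling (this is not the library's implementation — `compute_transition_scores` additionally handles beam reordering and optional renormalization): at each generated step, log-softmax the step's logits and pick out the score of the token that was actually selected.

```python
import math


def transition_scores(step_logits, chosen_ids):
    """For each step, log-softmax the logits and gather the chosen token's score."""
    scores = []
    for logits, token_id in zip(step_logits, chosen_ids):
        log_z = math.log(sum(math.exp(x) for x in logits))
        scores.append(logits[token_id] - log_z)
    return scores


# Two decoding steps over a toy 3-token vocabulary.
step_logits = [[2.0, 1.0, 0.0], [0.0, 3.0, 0.0]]
chosen_ids = [0, 1]  # tokens picked by greedy decoding
scores = transition_scores(step_logits, chosen_ids)
# The sequence log-probability is simply the sum of the transition scores.
sequence_logprob = sum(scores)
```

These per-token log-probabilities are exactly what powers color-coded confidence interfaces over generated text.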
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21191/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21191/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21191",
"html_url": "https://github.com/huggingface/transformers/pull/21191",
"diff_url": "https://github.com/huggingface/transformers/pull/21191.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21191.patch",
"merged_at": 1674219002000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21190
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21190/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21190/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21190/events
|
https://github.com/huggingface/transformers/pull/21190
| 1,549,031,507
|
PR_kwDOCUB6oc5IHicD
| 21,190
|
Update year 2020 to 2023
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21190). All of your documentation changes will be reflected on that endpoint."
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Updates the copyright year from 2020 to 2023 in a single file
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21190/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21190",
"html_url": "https://github.com/huggingface/transformers/pull/21190",
"diff_url": "https://github.com/huggingface/transformers/pull/21190.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21190.patch",
"merged_at": 1674130588000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21189
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21189/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21189/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21189/events
|
https://github.com/huggingface/transformers/pull/21189
| 1,549,009,984
|
PR_kwDOCUB6oc5IHds2
| 21,189
|
workaround documentation rendering bug
|
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
In the doc comments for a number of our models, the following occurs:
```text
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
>= 7.5 (Volta).
```
The documentation renderer sees the `>` from `>= 7.5 (Volta)` as starting a quote. The resulting docs look like this:
<img width="480" alt="Screen Shot 2023-01-19 at 12 48 38" src="https://user-images.githubusercontent.com/346853/213434848-2b9d8ea0-6975-43a6-984c-3ec501487ac6.png">
To work around this issue, I simply put the `>= 7.5` portion inside backticks everywhere in the code (even when it doesn't occur at the beginning of a line).
Arguably, this should be fixed in the documentation tools instead.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21189/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21189",
"html_url": "https://github.com/huggingface/transformers/pull/21189",
"diff_url": "https://github.com/huggingface/transformers/pull/21189.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21189.patch",
"merged_at": 1674132660000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21188
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21188/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21188/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21188/events
|
https://github.com/huggingface/transformers/pull/21188
| 1,548,993,837
|
PR_kwDOCUB6oc5IHaJ_
| 21,188
|
hertz is already per second
|
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Small documentation update. The sampling rate was described as "Hertz per second", but hertz (usually not capitalized) already means per second.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21188/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21188/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21188",
"html_url": "https://github.com/huggingface/transformers/pull/21188",
"diff_url": "https://github.com/huggingface/transformers/pull/21188.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21188.patch",
"merged_at": 1674141669000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21187
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21187/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21187/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21187/events
|
https://github.com/huggingface/transformers/pull/21187
| 1,548,914,776
|
PR_kwDOCUB6oc5IHIw5
| 21,187
|
[Whisper] Fix timestamp processor
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21187). All of your documentation changes will be reflected on that endpoint.",
"Tested with a concatenated librispeech (clean, test, 5.4 hours), took 393.1348168849945 seconds, with a WER of 0.030776774096215136. So in that case not really sure why we are performing better.\r\nOpenAI took 2661.047640323639 seconds and had a WER of 0.2589624153733004, which is pretty interesting (the model is `large`), so a ~6.7x speed-up.\r\n\r\n",
"Will open a PR for the other fix."
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Mostly adds conditions when looking in the past and in the future based on timing information.
Fixes the tests.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21187/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21187",
"html_url": "https://github.com/huggingface/transformers/pull/21187",
"diff_url": "https://github.com/huggingface/transformers/pull/21187.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21187.patch",
"merged_at": 1674141956000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21186
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21186/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21186/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21186/events
|
https://github.com/huggingface/transformers/pull/21186
| 1,548,787,087
|
PR_kwDOCUB6oc5IGs7X
| 21,186
|
Add Japanese translation index.mdx
|
{
"login": "kambehmw",
"id": 22996144,
"node_id": "MDQ6VXNlcjIyOTk2MTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/22996144?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kambehmw",
"html_url": "https://github.com/kambehmw",
"followers_url": "https://api.github.com/users/kambehmw/followers",
"following_url": "https://api.github.com/users/kambehmw/following{/other_user}",
"gists_url": "https://api.github.com/users/kambehmw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kambehmw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kambehmw/subscriptions",
"organizations_url": "https://api.github.com/users/kambehmw/orgs",
"repos_url": "https://api.github.com/users/kambehmw/repos",
"events_url": "https://api.github.com/users/kambehmw/events{/privacy}",
"received_events_url": "https://api.github.com/users/kambehmw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ArthurZucker could you have a look?",
"Thanks a lot for this 🚀 "
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds a Japanese translation of index.mdx.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #18413
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@omarespejel @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21186/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21186",
"html_url": "https://github.com/huggingface/transformers/pull/21186",
"diff_url": "https://github.com/huggingface/transformers/pull/21186.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21186.patch",
"merged_at": 1674147208000
}
|