| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/21890
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21890/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21890/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21890/events
|
https://github.com/huggingface/transformers/issues/21890
| 1,606,034,796
|
I_kwDOCUB6oc5fuiVs
| 21,890
|
[Time-Series] Autoformer - Transformer For Time-Series Forecasting
|
{
"login": "elisim",
"id": 17675462,
"node_id": "MDQ6VXNlcjE3Njc1NDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/17675462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elisim",
"html_url": "https://github.com/elisim",
"followers_url": "https://api.github.com/users/elisim/followers",
"following_url": "https://api.github.com/users/elisim/following{/other_user}",
"gists_url": "https://api.github.com/users/elisim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elisim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elisim/subscriptions",
"organizations_url": "https://api.github.com/users/elisim/orgs",
"repos_url": "https://api.github.com/users/elisim/repos",
"events_url": "https://api.github.com/users/elisim/events{/privacy}",
"received_events_url": "https://api.github.com/users/elisim/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[] | 1,677
| 1,686
| 1,686
|
CONTRIBUTOR
| null |
# Model Description
Following #20903 and #21099, Autoformer is the next Transformer in the series, published at NeurIPS 2021.
* Paper: [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008)
* Model implementation: https://github.com/thuml/Autoformer
I would like to implement the model :)
Thank you,
Eli
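For context, the core building block the paper proposes is a series-decomposition layer: a moving average extracts the trend, and the residual is treated as the seasonal part. A minimal NumPy sketch (my simplification, not the reference implementation, which edge-pads and applies this per channel inside the network):

```python
import numpy as np

def series_decomp(x, kernel_size=5):
    """Split a 1-D series into (seasonal, trend) via a moving average.

    Simplified sketch of Autoformer's decomposition block.
    """
    pad_front = kernel_size // 2
    pad_end = kernel_size - 1 - pad_front
    # Edge-pad so the moving average keeps the original length.
    padded = np.concatenate(
        [np.repeat(x[:1], pad_front), x, np.repeat(x[-1:], pad_end)]
    )
    trend = np.convolve(padded, np.ones(kernel_size) / kernel_size, mode="valid")
    seasonal = x - trend
    return seasonal, trend
```

By construction `seasonal + trend` reproduces the input exactly, so the decomposition is lossless.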
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
@wuhaixu2016 - repository creator
@NielsRogge @kashif
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21890/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21889
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21889/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21889/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21889/events
|
https://github.com/huggingface/transformers/pull/21889
| 1,605,722,082
|
PR_kwDOCUB6oc5LElAc
| 21,889
|
Add `inputs_embeds` functionality when generating with BioGPT
|
{
"login": "sidkiblawi",
"id": 9060789,
"node_id": "MDQ6VXNlcjkwNjA3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9060789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sidkiblawi",
"html_url": "https://github.com/sidkiblawi",
"followers_url": "https://api.github.com/users/sidkiblawi/followers",
"following_url": "https://api.github.com/users/sidkiblawi/following{/other_user}",
"gists_url": "https://api.github.com/users/sidkiblawi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sidkiblawi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sidkiblawi/subscriptions",
"organizations_url": "https://api.github.com/users/sidkiblawi/orgs",
"repos_url": "https://api.github.com/users/sidkiblawi/repos",
"events_url": "https://api.github.com/users/sidkiblawi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sidkiblawi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"review request: @gante ",
"Thanks for your contribution!"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR extends https://github.com/huggingface/transformers/pull/21405 by @gante to BioGPT, making it accept `inputs_embeds` when generating.
```python
import torch
from transformers import BioGptTokenizer, BioGptForCausalLM

model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")

inputs_embeds = torch.rand((1, 1, 1024))  # embeddings for 1 dummy soft-prompt token
attention_mask = torch.ones(inputs_embeds.shape[:2], dtype=torch.long)
filler_input_ids = torch.LongTensor([[model.config.bos_token_id]])

model.generate(
    filler_input_ids,
    attention_mask=attention_mask,
    inputs_embeds=inputs_embeds,
    max_new_tokens=300,
    num_beams=4,
)
```
# Who can Review?
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21889/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21889",
"html_url": "https://github.com/huggingface/transformers/pull/21889",
"diff_url": "https://github.com/huggingface/transformers/pull/21889.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21889.patch",
"merged_at": 1677760999000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21888
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21888/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21888/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21888/events
|
https://github.com/huggingface/transformers/issues/21888
| 1,605,668,576
|
I_kwDOCUB6oc5ftI7g
| 21,888
|
`shuffle` argument when initializing Sampler-related classes in `trainer.py`
|
{
"login": "gugarosa",
"id": 4120639,
"node_id": "MDQ6VXNlcjQxMjA2Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4120639?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gugarosa",
"html_url": "https://github.com/gugarosa",
"followers_url": "https://api.github.com/users/gugarosa/followers",
"following_url": "https://api.github.com/users/gugarosa/following{/other_user}",
"gists_url": "https://api.github.com/users/gugarosa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gugarosa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gugarosa/subscriptions",
"organizations_url": "https://api.github.com/users/gugarosa/orgs",
"repos_url": "https://api.github.com/users/gugarosa/repos",
"events_url": "https://api.github.com/users/gugarosa/events{/privacy}",
"received_events_url": "https://api.github.com/users/gugarosa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"`shuffle=True` is the default for this DistributedSampler (see [doc](https://pytorch.org/docs/stable/data.html#torch.utils.data.distributed.DistributedSampler)). There is no need to pass it.",
"You are right, my bad. I overlooked the documentation."
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### Feature request
Add support for an additional keyword argument (`shuffle`) in `DistributedSampler` (https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L839).
### Motivation
`DistributedSampler` samples data deterministically across multiple processes. However, when training on a contiguous array of pre-encoded data, shuffling in the sampler would avoid feeding sequential `input_ids` from extremely long files and could reduce the number of spikes in the loss.
### Your contribution
Would you consider it worth adding? In my use case, I have to rewrite the whole `_get_train_sampler()` just to set `shuffle=True`.
If necessary, I can contribute with a PR.
Thanks for your attention and best regards,
Gustavo.
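For illustration, the behavior in question can be sketched in plain Python (a simplification of what `DistributedSampler` does internally; the helper name is mine): each rank builds the same seeded permutation, then takes its own strided slice, so shuffling stays consistent across processes.

```python
import random

def rank_indices(dataset_len, num_replicas, rank, epoch, shuffle=True, seed=0):
    # Simplified sketch of DistributedSampler's index logic (ignores the
    # padding of the trailing batch): seed with (seed + epoch) so every
    # process draws the identical permutation, then slice by rank.
    indices = list(range(dataset_len))
    if shuffle:
        random.Random(seed + epoch).shuffle(indices)
    return indices[rank::num_replicas]
```

With `shuffle=True`, consecutive examples on one rank are no longer adjacent in the source file, which is the property motivating this request.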
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21888/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21887
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21887/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21887/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21887/events
|
https://github.com/huggingface/transformers/pull/21887
| 1,605,566,654
|
PR_kwDOCUB6oc5LEDAf
| 21,887
|
Mark pipeline tests to skip them easily
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
This PR re-introduces the `is_pipeline_test` marker to easily flag pipeline tests. The problem is that since #21516, pipeline tests are no longer isolated in the pipelines folder (it looks like they are, but the pipeline tester is inherited by all model test classes). The `is_pipeline_test` marker lets us flag those tests anyway.
Setting the `RUN_PIPELINE_TESTS` environment variable to False skips all pipeline tests. Contrary to other similar env variables, this one defaults to True, because it's very annoying to have to remember to add `RUN_PIPELINE_TESTS=yes` before the pytest command when debugging locally. I'll propose switching the defaults of all other env variables in a follow-up PR. The main thing is to set it to False in test jobs unrelated to pipelines (which this PR does).
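The env-var gate can be sketched like this (simplified; the actual helper lives in `transformers.testing_utils`, and its exact parsing may differ):

```python
import os

def parse_flag_from_env(key, default=False):
    # Unset -> default; otherwise accept the usual truthy/falsy spellings.
    value = os.environ.get(key)
    if value is None:
        return default
    return value.lower() not in ("0", "false", "no", "off")

# Unlike most test flags, this one defaults to True so local debugging
# doesn't require remembering to prepend RUN_PIPELINE_TESTS=yes:
RUN_PIPELINE_TESTS = parse_flag_from_env("RUN_PIPELINE_TESTS", default=True)
```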
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21887/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21887",
"html_url": "https://github.com/huggingface/transformers/pull/21887",
"diff_url": "https://github.com/huggingface/transformers/pull/21887.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21887.patch",
"merged_at": 1677772537000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21886
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21886/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21886/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21886/events
|
https://github.com/huggingface/transformers/pull/21886
| 1,605,563,745
|
PR_kwDOCUB6oc5LECYg
| 21,886
|
Fix pipe comm test
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
PR #21600 added task `zero-shot-audio-classification`:
```python
"zero-shot-audio-classification": {
"impl": ZeroShotAudioClassificationPipeline,
"tf": (TFAutoModel,) if is_tf_available() else (),
"pt": (AutoModel,) if is_torch_available() else (),
"default": {
"model": {
"pt": ("laion/clap-htsat-fused", "f39917b"),
}
},
"type": "multimodal",
},
```
But this fails the test `tests/pipelines/test_pipelines_common.py::PipelineUtilsTest::test_load_default_pipelines_tf`, as there is a `tf` key under the task entry but not under its `default`. There is a check in that test method:
```python
if len(relevant_auto_classes) == 0:
# task has no default
logger.debug(f"{task} in {framework} has no default")
return
```
So I decided to set `"tf": (),` for now (we have no `TFClap` yet).
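The resulting entry then looks roughly like this (a sketch of the fix, not the exact diff):

```python
"zero-shot-audio-classification": {
    "impl": ZeroShotAudioClassificationPipeline,
    "tf": (),  # no TF CLAP implementation yet, so no TF default either
    "pt": (AutoModel,) if is_torch_available() else (),
    "default": {
        "model": {
            "pt": ("laion/clap-htsat-fused", "f39917b"),
        }
    },
    "type": "multimodal",
},
```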
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21886/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21886",
"html_url": "https://github.com/huggingface/transformers/pull/21886",
"diff_url": "https://github.com/huggingface/transformers/pull/21886.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21886.patch",
"merged_at": 1677703947000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21885
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21885/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21885/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21885/events
|
https://github.com/huggingface/transformers/issues/21885
| 1,605,489,098
|
I_kwDOCUB6oc5fsdHK
| 21,885
|
Text-classification example does not work as is on 4.27.0.dev
|
{
"login": "jojivk73",
"id": 14943401,
"node_id": "MDQ6VXNlcjE0OTQzNDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/14943401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jojivk73",
"html_url": "https://github.com/jojivk73",
"followers_url": "https://api.github.com/users/jojivk73/followers",
"following_url": "https://api.github.com/users/jojivk73/following{/other_user}",
"gists_url": "https://api.github.com/users/jojivk73/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jojivk73/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jojivk73/subscriptions",
"organizations_url": "https://api.github.com/users/jojivk73/orgs",
"repos_url": "https://api.github.com/users/jojivk73/repos",
"events_url": "https://api.github.com/users/jojivk73/events{/privacy}",
"received_events_url": "https://api.github.com/users/jojivk73/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Rocketknight1 ",
"My bad :(. It was trying to use for GPT-J instead of distilbert-base-cased",
"Ah yes - CLM models generally need some modifications to transfer to text classification tasks, including adding a padding token. Their performance is also usually worse. Using a MLM (masked language model, like BERT/RoBERTa/DistilBERT/DeBERTa) base instead will work much better!"
] | 1,677
| 1,677
| 1,677
|
NONE
| null |
### System Info
Transformers: 4.27.0.dev
The example in `examples/tensorflow/text-classification/run_glue.py` fails when running the command given in its README:
```
python run_glue.py \
  --model_name_or_path distilbert-base-cased \
  --task_name mnli \
  --do_train \
  --do_eval \
  --do_predict \
  --output_dir outdir \
  --predict_file data_to_predict.json
```
Issues:
1. `data_to_predict.json` is missing.
2. `--output_dir` is not mentioned in the README.
3. Without `--predict_file` + `--output_dir`, it fails with the following error:

### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the command given in the description.
### Expected behavior
The task runs without issues.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21885/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21884
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21884/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21884/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21884/events
|
https://github.com/huggingface/transformers/issues/21884
| 1,605,480,960
|
I_kwDOCUB6oc5fsbIA
| 21,884
|
Add finetuning task support for GPT-J to 4.27.0.dev
|
{
"login": "jojivk73",
"id": 14943401,
"node_id": "MDQ6VXNlcjE0OTQzNDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/14943401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jojivk73",
"html_url": "https://github.com/jojivk73",
"followers_url": "https://api.github.com/users/jojivk73/followers",
"following_url": "https://api.github.com/users/jojivk73/following{/other_user}",
"gists_url": "https://api.github.com/users/jojivk73/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jojivk73/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jojivk73/subscriptions",
"organizations_url": "https://api.github.com/users/jojivk73/orgs",
"repos_url": "https://api.github.com/users/jojivk73/repos",
"events_url": "https://api.github.com/users/jojivk73/events{/privacy}",
"received_events_url": "https://api.github.com/users/jojivk73/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,681
| 1,681
|
NONE
| null |
### Feature request
I have tried running fine-tuning tasks, including QA, summarization, and text classification, with GPT-J. So far only QA could be made to work, with a minor hack to use the DistilBERT tokenizer, and I am not sure that is the best approach. The other tasks do not work for GPT-J. Any help is appreciated.
Thanks
### Motivation
Fine-tuning for LLMs (GPT-J)
### Your contribution
I would need to understand the HF Transformers code before making any contribution. I will submit PRs if I see possible ways to improve it.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21884/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21883
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21883/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21883/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21883/events
|
https://github.com/huggingface/transformers/pull/21883
| 1,605,412,666
|
PR_kwDOCUB6oc5LDh6e
| 21,883
|
Fix `WhisperModelTest`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
### Fix `test_model_parallelism` for `Whisper`
The merged PR `Add WhisperTokenizerFast` (#21222) (02/21) started failing `test_model_parallelism`.
That PR didn't change any relevant test/modeling file; it just changed the model tester's vocab size from `99` to `200`.
When I traced it, I found that at this place inside `WhisperDecoder`:
```python
inputs_embeds = self.embed_tokens(input_ids)
positions = self.embed_positions(input_ids, ...)
hidden_states = inputs_embeds + positions
```
- in the previous commit, the `embed_tokens` and `embed_positions` weight matrices were both on GPU `1`;
- after that PR, one is on GPU `0` and the other on GPU `1`.
I fixed the issue by adding
```python
# Needs higher percentages after model tester's vocab_size is changed to 200 (PR #21222)
model_split_percents = [0.8, 0.9]
```
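The failure class can be illustrated without GPUs; this stand-in mirrors what PyTorch enforces when two operands live on different devices (illustration only, not Whisper code):

```python
class FakeTensor:
    """Minimal tensor stand-in that tracks its device placement."""

    def __init__(self, value, device):
        self.value = value
        self.device = device

    def __add__(self, other):
        # PyTorch raises a RuntimeError in the same situation.
        if self.device != other.device:
            raise RuntimeError(
                f"Expected all tensors to be on the same device, "
                f"but found {self.device} and {other.device}"
            )
        return FakeTensor(self.value + other.value, self.device)
```

With the `embed_tokens` output on one GPU and the `embed_positions` output on another, the sum `inputs_embeds + positions` is exactly this kind of cross-device add.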
### Fix `test_torchscript_*` for `Whisper`
PR #21298 added an optional `attention_mask` argument to the `WhisperModel` model classes. The `torchscript` tests need a small change to make them work.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21883/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21883",
"html_url": "https://github.com/huggingface/transformers/pull/21883",
"diff_url": "https://github.com/huggingface/transformers/pull/21883.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21883.patch",
"merged_at": 1677699687000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21882
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21882/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21882/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21882/events
|
https://github.com/huggingface/transformers/pull/21882
| 1,605,373,543
|
PR_kwDOCUB6oc5LDZiA
| 21,882
|
Fix Gradient checkpointing bug BigBird
|
{
"login": "saswatmeher",
"id": 35535056,
"node_id": "MDQ6VXNlcjM1NTM1MDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/35535056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saswatmeher",
"html_url": "https://github.com/saswatmeher",
"followers_url": "https://api.github.com/users/saswatmeher/followers",
"following_url": "https://api.github.com/users/saswatmeher/following{/other_user}",
"gists_url": "https://api.github.com/users/saswatmeher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saswatmeher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saswatmeher/subscriptions",
"organizations_url": "https://api.github.com/users/saswatmeher/orgs",
"repos_url": "https://api.github.com/users/saswatmeher/repos",
"events_url": "https://api.github.com/users/saswatmeher/events{/privacy}",
"received_events_url": "https://api.github.com/users/saswatmeher/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @saswatmeher -- looking at the diff and our CI, I'd say something went wrong with `make fixup`. My recommendation would be to update the installation on your end (`pip install -e .[dev] --upgrade`), for instance `ruff` got a recent update.\r\n\r\nAnd then run `make fixup` again :D",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes a bug that users can encounter when calling `generate()` on models that use gradient checkpointing.
Fixes issue https://github.com/huggingface/transformers/issues/21737 for BigBird.
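The fix follows the pattern used across models for this issue, sketched here with a hypothetical helper (the real change lives inside the model's forward pass): when gradient checkpointing is active during training, `use_cache` has to be forced off, since cached past key/values are incompatible with recomputing activations.

```python
import logging

logger = logging.getLogger(__name__)

def resolve_use_cache(gradient_checkpointing: bool, training: bool, use_cache: bool) -> bool:
    # Sketch of the guard: warn and disable caching when it would
    # conflict with checkpointed training.
    if gradient_checkpointing and training and use_cache:
        logger.warning(
            "`use_cache=True` is incompatible with gradient checkpointing. "
            "Setting `use_cache=False`..."
        )
        return False
    return use_cache
```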
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.(#21737 )
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
cc @gante, @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21882/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21882",
"html_url": "https://github.com/huggingface/transformers/pull/21882",
"diff_url": "https://github.com/huggingface/transformers/pull/21882.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21882.patch",
"merged_at": 1677697804000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21881
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21881/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21881/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21881/events
|
https://github.com/huggingface/transformers/pull/21881
| 1,605,278,318
|
PR_kwDOCUB6oc5LDE02
| 21,881
|
Add check for different embedding types in examples
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
MEMBER
| null |
This PR fixes an issue that occurred because we did two things at the same time: we started transitioning our models to use `keras.Embedding` layers, but we also added code to the examples to only resize embeddings when necessary by checking `model.get_input_embeddings().weight.shape`. Because of the transition, the relevant attribute became `model.get_input_embeddings().embeddings` in some cases!
To fix this, I added a check for both types of embeddings. When the transition is complete, we can remove the `.weight` code path and only use `.embeddings`.
I've checked that all the affected examples run without errors using the command supplied in the README following this PR.
Fixes #21865
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21881/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21881",
"html_url": "https://github.com/huggingface/transformers/pull/21881",
"diff_url": "https://github.com/huggingface/transformers/pull/21881.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21881.patch",
"merged_at": 1677689826000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21880
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21880/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21880/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21880/events
|
https://github.com/huggingface/transformers/pull/21880
| 1,605,273,763
|
PR_kwDOCUB6oc5LDD2l
| 21,880
|
[Refactor] Relative imports wherever we can
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Cleans up our code, mostly by using relative imports with submodules instead of global imports.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21880/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21880",
"html_url": "https://github.com/huggingface/transformers/pull/21880",
"diff_url": "https://github.com/huggingface/transformers/pull/21880.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21880.patch",
"merged_at": 1677746742000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21879
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21879/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21879/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21879/events
|
https://github.com/huggingface/transformers/pull/21879
| 1,605,272,872
|
PR_kwDOCUB6oc5LDDqn
| 21,879
|
Make loading of pretrained gpt2 faster by avoiding initialization of Conv1D's weights
|
{
"login": "twaka",
"id": 8081197,
"node_id": "MDQ6VXNlcjgwODExOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8081197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/twaka",
"html_url": "https://github.com/twaka",
"followers_url": "https://api.github.com/users/twaka/followers",
"following_url": "https://api.github.com/users/twaka/following{/other_user}",
"gists_url": "https://api.github.com/users/twaka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/twaka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/twaka/subscriptions",
"organizations_url": "https://api.github.com/users/twaka/orgs",
"repos_url": "https://api.github.com/users/twaka/repos",
"events_url": "https://api.github.com/users/twaka/events{/privacy}",
"received_events_url": "https://api.github.com/users/twaka/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Currently, pytorch_util's Conv1D always computes `normal_` to initialize weights regardless of `init_empty_weights`.
This makes the model loading time of gpt2 longer.
This PR fixes that by reordering the initialization in Conv1D so that `normal_` is applied after the weight is assigned as an `nn.Parameter`, avoiding the unnecessary initialization computation.
gpt2-xl.py
```py
from transformers import AutoModelForCausalLM
import torch
model = AutoModelForCausalLM.from_pretrained(
"gpt2-xl",
torch_dtype=torch.half,
low_cpu_mem_usage=True)
```
before
```
$ python -m cProfile -s tottime gpt2-xl.py | head
1815860 function calls (1739023 primitive calls) in 25.421 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
243 19.131 0.079 19.131 0.079 {method 'normal_' of 'torch._C._TensorBase' objects}
1256 1.820 0.001 1.820 0.001 {method '_set_from_file' of 'torch._C.StorageBase' objects}
3045 1.427 0.000 1.427 0.000 {method 'to' of 'torch._C._TensorBase' objects}
2 0.677 0.339 0.677 0.339 {method 'do_handshake' of '_ssl._SSLSocket' objects}
2 0.380 0.190 0.380 0.190 {method 'read' of '_ssl._SSLSocket' objects}
```
after
```
$ python -m cProfile -s tottime gpt2-xl.py | head
1816052 function calls (1739215 primitive calls) in 5.691 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1256 1.791 0.001 1.791 0.001 {method '_set_from_file' of 'torch._C.StorageBase' objects}
3045 0.892 0.000 0.892 0.000 {method 'to' of 'torch._C._TensorBase' objects}
2 0.676 0.338 0.676 0.338 {method 'do_handshake' of '_ssl._SSLSocket' objects}
2 0.402 0.201 0.402 0.201 {method 'connect' of '_socket.socket' objects}
2 0.373 0.186 0.373 0.186 {method 'read' of '_ssl._SSLSocket' objects}
```
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #21863
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21879/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21879",
"html_url": "https://github.com/huggingface/transformers/pull/21879",
"diff_url": "https://github.com/huggingface/transformers/pull/21879.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21879.patch",
"merged_at": 1677689961000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21878
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21878/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21878/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21878/events
|
https://github.com/huggingface/transformers/issues/21878
| 1,605,163,796
|
I_kwDOCUB6oc5frNsU
| 21,878
|
Make whisper-event checkpoints compliant to support `return_timestamp`
|
{
"login": "Vaibhavs10",
"id": 18682411,
"node_id": "MDQ6VXNlcjE4NjgyNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vaibhavs10",
"html_url": "https://github.com/Vaibhavs10",
"followers_url": "https://api.github.com/users/Vaibhavs10/followers",
"following_url": "https://api.github.com/users/Vaibhavs10/following{/other_user}",
"gists_url": "https://api.github.com/users/Vaibhavs10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vaibhavs10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vaibhavs10/subscriptions",
"organizations_url": "https://api.github.com/users/Vaibhavs10/orgs",
"repos_url": "https://api.github.com/users/Vaibhavs10/repos",
"events_url": "https://api.github.com/users/Vaibhavs10/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vaibhavs10/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Well if you are using `return_timestamps = True` you are asking for it 😅\r\nThis functionality was introduced after. Let's tell our users that they have to set it in the Generation config (when we pop). Otherwise the `generate` function should be able to set a default value if multilingual or not",
"Hey hey! - Sorry I did not do a good job at explaining the intent. For a typical developer who doesn't have any clue of how these checkpoints were fine-tuned and just wants to use a checkpoint on the hub for downstream inference only, this poses a challenge.\r\n\r\nFor them, they'd typically just take a checkpoint throw it into the pipe and expect the pipeline to do its magic - transcribe and provide the timestamps.\r\n\r\nSo my ask here is the following:\r\n1. Is there a way to make the checkpoints trained during the Whisper event compliant with the most recent changes?\r\n2. Can we add a more informative Error message so that an average developer knows what to do next?\r\n\r\nIMO point 1 is really important as our library of fine-tuned models is one of the distinguishing factors for us. It'd be less than ideal if we ask the community to have to fine-tune their checkpoints again to be able to get timestamps.\r\n\r\nHope this makes more sense!",
"For 1. I think we can open a PR on all of the whisper models that are from the event to add the required generation config WDYT? \r\n2. This can of course be done on either `generate` in whisper modelling or in the logits processor!\r\n\r\nMakes a lot of sense thanks for reporting! 👍🏻 \r\n",
"> 1. I think we can open a PR on all of the whisper models that are from the event to add the required generation config WDYT?\r\n\r\nJust to be clear, if I add the `no_timestamps_token_id` to config, it would work with timestamps with re-finetuning?",
"The model should already be able to produce timestamps without finetuning (as it is knowledge from the pretrained model) but might not be as good as the original pretrained model. \r\nYou need more than just `no_timestamps_token_id`. You have to use the `generation_config` in full that is available on the openai checkpoints. \r\nThis is required as it is a new behaviour ",
"Hey @ArthurZucker -> Can you maybe provide the steps one needs to take to make the checkpoints compatible? We can then potentially run autoPR on all the Whisper checkpoints produced during the whisper-event.",
"You can just do something like \r\n```python \r\nfrom transformers import GenerationConfig, WhisperForConditionalGeneration\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"your_pretrained_checkpoint\")\r\ngeneration_config = GenerationConfig.from_pretrained(\"openai/whisper-base\") # if you are using a multilingual model\r\nmodel.generation_config = generation_config\r\nmodel.push_to_hub(\"your_pretrained_checkpoint\", use_auth_token = \"your_token_if_not_logged_in\", create_pr = True)\r\n```\r\n",
"Would it not be easier to make changes in the codebase to make it robust to the changes we made to generate (switching to generate config and adding timestamp prediction)? What we have is currently backwards breaking 🚨 and something we want to avoid",
"That makes sense, then I'll refrain from the Auto-PR and wait for these changes to be merged into `main`. Thank you @sanchit-gandhi & @ArthurZucker <3",
"The main issue is that the `generation_config.no_timestamps_token_id` is kind of linked to the model (english or not). We are lucky that all the models are multilingual, but we can't default 2 values, and breaking changes it is, but we kind of have to. ",
"I will add it to the `config` of whisper, will be easier to deal with that!\r\n",
"Edit: I think opening PR to the relevant repositories will help (easier to generate the `generation_config`. Also this is not a problem for backward compatibility, as timestamps is a new feature, and is not part of any release yet. However #21937 is indeed a problem and will be fixed by #21965. In the mean time, will also add a warning in case `return_timestamps` is used when the generation config is not properly setup, that will refer to the solution I shared here! "
] | 1,677
| 1,678
| 1,678
|
MEMBER
| null |
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
@sanchit-gandhi @ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running inference with a Whisper checkpoint fine-tuned before the `TimestampProcessor` was introduced into transformers returns a rather uninformative error message: `AttributeError: 'GenerationConfig' object has no attribute 'no_timestamps_token_id'`
Minimum steps to reproduce this:
```python
from transformers.pipelines import AutomaticSpeechRecognitionPipeline, pipeline
from datasets import load_dataset
cv11 = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="test", streaming=True)
pipe = pipeline(model="sanchit-gandhi/whisper-small-hi", return_timestamps=True)
test_sample = {"raw": next(iter(cv11))["audio"]["array"],
"sampling_rate": next(iter(cv11))["audio"]["sampling_rate"]}
pipe(test_sample)
```
Colab notebook: [here](https://github.com/Vaibhavs10/scratchpad/blob/main/pipeline_backward_compatability_test.ipynb)
The above snippet throws the error mentioned above. This problem affects the majority (727) of the checkpoints fine-tuned during the Whisper Event.
P.S. This has been reported by multiple community members, so not just me.
### Expected behavior
We should ideally make the `return_timestamps` functionality backwards compatible or throw a more informative error message.
Sorry if there already is a way to do this and I am just misinformed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21878/reactions",
"total_count": 3,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21878/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21877
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21877/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21877/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21877/events
|
https://github.com/huggingface/transformers/issues/21877
| 1,605,144,631
|
I_kwDOCUB6oc5frJA3
| 21,877
|
__init__() got an unexpected keyword argument 'int8_threshold'
|
{
"login": "DHOFM",
"id": 27775323,
"node_id": "MDQ6VXNlcjI3Nzc1MzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/27775323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DHOFM",
"html_url": "https://github.com/DHOFM",
"followers_url": "https://api.github.com/users/DHOFM/followers",
"following_url": "https://api.github.com/users/DHOFM/following{/other_user}",
"gists_url": "https://api.github.com/users/DHOFM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DHOFM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DHOFM/subscriptions",
"organizations_url": "https://api.github.com/users/DHOFM/orgs",
"repos_url": "https://api.github.com/users/DHOFM/repos",
"events_url": "https://api.github.com/users/DHOFM/events{/privacy}",
"received_events_url": "https://api.github.com/users/DHOFM/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You need to use the `quantization_config` to set any argument relative to 8bit loading, cc @younesbelkada ",
"Thanks, I tried and get : dispatch_model() got an unexpected keyword argument 'offload_index'... I thought offloading to cpu is default false?\r\n\r\ni use\r\n`quantization_config = BitsAndBytesConfig(llm_int8_threshold=4.0)`\r\n\r\nand \r\n\r\n```\r\nmodel_8bit = AutoModelForCausalLM.from_pretrained(\r\n model_id,\r\n device_map=device_map,\r\n quantization_config=quantization_config,\r\n)\r\n```\r\n\r\ndevice_map is auto",
"You probably need an upgrade in your Accelerate lib to fix this error.",
"Mea culpa, I used one of my old Containers on my GPU Server, upgraded accelerate and it works as expected... Thanks a lot for your fast support",
"Even i upgraded the accelerate, it is throwing me the same error @DHOFM can you please tell me what version of accelerate are you using in your container? "
] | 1,677
| 1,686
| 1,677
|
NONE
| null |
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-4.15.0-177-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
also tried on Colab:
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Taken from Notebook named: HuggingFace meets bitsandbytes for lighter models on GPU for inference
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
name = "bigscience/bloom-3b"
model_8bit_thresh_4 = AutoModelForCausalLM.from_pretrained(name, device_map="auto", load_in_8bit=True, int8_threshold=4.0)
```
Error is: TypeError: __init__() got an unexpected keyword argument 'int8_threshold'
### Expected behavior
No error setting threshold
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21877/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21876
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21876/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21876/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21876/events
|
https://github.com/huggingface/transformers/pull/21876
| 1,605,140,653
|
PR_kwDOCUB6oc5LCnTJ
| 21,876
|
[WIP] Flax EfficientNet
|
{
"login": "Shubhamai",
"id": 51819922,
"node_id": "MDQ6VXNlcjUxODE5OTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/51819922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shubhamai",
"html_url": "https://github.com/Shubhamai",
"followers_url": "https://api.github.com/users/Shubhamai/followers",
"following_url": "https://api.github.com/users/Shubhamai/following{/other_user}",
"gists_url": "https://api.github.com/users/Shubhamai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shubhamai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shubhamai/subscriptions",
"organizations_url": "https://api.github.com/users/Shubhamai/orgs",
"repos_url": "https://api.github.com/users/Shubhamai/repos",
"events_url": "https://api.github.com/users/Shubhamai/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shubhamai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21876). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@sanchit-gandhi was not properly tagged.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Feel free to re-open if you want to finish this one off @Shubhamai! Otherwise leaving closed for now."
] | 1,677
| 1,684
| 1,684
|
CONTRIBUTOR
| null |
# What does this PR do?
Following the PR https://github.com/huggingface/transformers/pull/21563 by [alaradirik](https://github.com/alaradirik) to add the corresponding flax model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- Flax: sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21876/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21876",
"html_url": "https://github.com/huggingface/transformers/pull/21876",
"diff_url": "https://github.com/huggingface/transformers/pull/21876.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21876.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21875
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21875/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21875/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21875/events
|
https://github.com/huggingface/transformers/issues/21875
| 1,604,999,890
|
I_kwDOCUB6oc5fqlrS
| 21,875
|
Add SpikeGPT model
|
{
"login": "gsarti",
"id": 16674069,
"node_id": "MDQ6VXNlcjE2Njc0MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/16674069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gsarti",
"html_url": "https://github.com/gsarti",
"followers_url": "https://api.github.com/users/gsarti/followers",
"following_url": "https://api.github.com/users/gsarti/following{/other_user}",
"gists_url": "https://api.github.com/users/gsarti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gsarti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gsarti/subscriptions",
"organizations_url": "https://api.github.com/users/gsarti/orgs",
"repos_url": "https://api.github.com/users/gsarti/repos",
"events_url": "https://api.github.com/users/gsarti/events{/privacy}",
"received_events_url": "https://api.github.com/users/gsarti/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Thanks for your interest in our work! The checkpoint weights of the 120M SpikeGPT are available [now](https://huggingface.co/ridger/SpikeGPT-BookCorpus/blob/main/BookCorpus-SpikeGPT.pth), but just for debugging and playing with the model.",
"I've read the paper, this model looks really cool 👍 "
] | 1,677
| 1,678
| null |
CONTRIBUTOR
| null |
### Model description
**Abstract:**
>As the size of large language models continue to scale, so does the computational resources required to run it. Spiking neural networks (SNNs) have emerged as an energy-efficient approach to deep learning that leverage sparse and event-driven activations to reduce the computational overhead associated with model inference. While they have become competitive with non-spiking models on many computer vision tasks, SNNs have also proven to be more challenging to train. As a result, their performance lags behind modern deep learning, and we are yet to see the effectiveness of SNNs in language generation. In this paper, inspired by the RWKV language model, we successfully implement `SpikeGPT', a generative language model with pure binary, event-driven spiking activation units. We train the proposed model on three model variants: 45M, 125M and 260M parameters. To the best of our knowledge, this is 4x larger than any functional backprop-trained SNN to date. We achieve this by modifying the transformer block to replace multi-head self attention to reduce quadratic computational complexity to linear with increasing sequence length. Input tokens are instead streamed in sequentially to our attention mechanism (as with typical SNNs). Our preliminary experiments show that SpikeGPT remains competitive with non-spiking models on tested benchmarks, while maintaining 5x less energy consumption when processed on neuromorphic hardware that can leverage sparse, event-driven activations.
Concretely, it is a GPT model using Receptance Weighted Key Value (RWKV) instead of regular attention, and an adapted FFN layer.
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
[Paper](https://arxiv.org/abs/2302.13939) | [Code](https://github.com/ridgerchu/SpikeGPT)
Author: @ridgerchu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21875/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/21874
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21874/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21874/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21874/events
|
https://github.com/huggingface/transformers/pull/21874
| 1,604,960,569
|
PR_kwDOCUB6oc5LB_4O
| 21,874
|
fix checkpoint
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Uses the correct checkpoints for doctests
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21874/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21874",
"html_url": "https://github.com/huggingface/transformers/pull/21874",
"diff_url": "https://github.com/huggingface/transformers/pull/21874.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21874.patch",
"merged_at": 1677743241000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21873
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21873/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21873/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21873/events
|
https://github.com/huggingface/transformers/pull/21873
| 1,604,921,753
|
PR_kwDOCUB6oc5LB3VY
| 21,873
|
Add TFVisionTextDualEncoder
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Good spot, I've added your suggestions and I'll add the modeling file to the documentation check list!",
"The failing test is unrelated (OPT generation), merging!"
] | 1,677
| 1,677
| 1,677
|
MEMBER
| null |
This PR uses the new weight crossloading functions to add the missing `TFVisionTextDualEncoder` class.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21873/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21873",
"html_url": "https://github.com/huggingface/transformers/pull/21873",
"diff_url": "https://github.com/huggingface/transformers/pull/21873.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21873.patch",
"merged_at": 1677693648000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21872
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21872/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21872/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21872/events
|
https://github.com/huggingface/transformers/pull/21872
| 1,604,894,350
|
PR_kwDOCUB6oc5LBxhn
| 21,872
|
Removed BLIP mention from the troubleshooting guide
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
Now that BLIP has an AutoModel mapping, (see https://github.com/huggingface/transformers/pull/21817), this PR removes mention of BLIP's edge case from the troubleshooting guide.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21872/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21872",
"html_url": "https://github.com/huggingface/transformers/pull/21872",
"diff_url": "https://github.com/huggingface/transformers/pull/21872.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21872.patch",
"merged_at": 1677677186000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21871
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21871/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21871/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21871/events
|
https://github.com/huggingface/transformers/pull/21871
| 1,604,815,671
|
PR_kwDOCUB6oc5LBgJ2
| 21,871
|
Italian translation of community.mdx
|
{
"login": "lorenzobalzani",
"id": 45718582,
"node_id": "MDQ6VXNlcjQ1NzE4NTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/45718582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lorenzobalzani",
"html_url": "https://github.com/lorenzobalzani",
"followers_url": "https://api.github.com/users/lorenzobalzani/followers",
"following_url": "https://api.github.com/users/lorenzobalzani/following{/other_user}",
"gists_url": "https://api.github.com/users/lorenzobalzani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lorenzobalzani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lorenzobalzani/subscriptions",
"organizations_url": "https://api.github.com/users/lorenzobalzani/orgs",
"repos_url": "https://api.github.com/users/lorenzobalzani/repos",
"events_url": "https://api.github.com/users/lorenzobalzani/events{/privacy}",
"received_events_url": "https://api.github.com/users/lorenzobalzani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @lorenzobalzani , I'll review it in the next few days"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Italian translation of community.mdx
See issue: https://github.com/huggingface/transformers/issues/17459
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @nickprock
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21871/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21871",
"html_url": "https://github.com/huggingface/transformers/pull/21871",
"diff_url": "https://github.com/huggingface/transformers/pull/21871.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21871.patch",
"merged_at": 1677674996000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21870
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21870/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21870/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21870/events
|
https://github.com/huggingface/transformers/pull/21870
| 1,604,698,828
|
PR_kwDOCUB6oc5LBGgx
| 21,870
|
Prophetnet batch dimension inversion fix
|
{
"login": "kiansierra",
"id": 47116198,
"node_id": "MDQ6VXNlcjQ3MTE2MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/47116198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kiansierra",
"html_url": "https://github.com/kiansierra",
"followers_url": "https://api.github.com/users/kiansierra/followers",
"following_url": "https://api.github.com/users/kiansierra/following{/other_user}",
"gists_url": "https://api.github.com/users/kiansierra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kiansierra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiansierra/subscriptions",
"organizations_url": "https://api.github.com/users/kiansierra/orgs",
"repos_url": "https://api.github.com/users/kiansierra/repos",
"events_url": "https://api.github.com/users/kiansierra/events{/privacy}",
"received_events_url": "https://api.github.com/users/kiansierra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @younesbelkada and @ArthurZucker ",
    "Hi @ArthurZucker, thanks for the feedback.\r\nI've implemented the suggestions you mentioned, adding full-text notation of the expected tensor dimensions and separating tensor operations into multiple lines instead of chaining, where requested",
    "Thanks for the kind feedback @ArthurZucker.\r\nJust for clarity before merging: I should update the integration tests as described in the attached Colab. The current version generates different text based on the other elements in the batch, while the new version returns the same output as if generated with a batch size of 1",
"I've now updated the integration tests, they should pass now"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17455
No longer mixes the batch dimension with other dimensions; inverting the inputs along the batch dimension now also inverts the outputs, and the loss is the same independent of batch order.
Currently all tests pass (locally) except the integration tests, which I believe fail due to the issue at hand, as can be seen in this example. Essentially, the integration test for generation returns different outputs based on what other elements are in the batch; with this fix it returns the same output as with a batch of 1.
[](https://colab.research.google.com/drive/12EAAbXZSemzvuoz5g_3WAe0sH1YAZUwk?usp=sharing)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@patrickvonplaten @patil-suraj
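The property this PR restores can be stated as a simple framework-agnostic check: reversing the batch order of the inputs should reverse the outputs and change nothing else. The following is an illustrative sketch (the helper name is hypothetical, not the actual test in the PR):

```python
import numpy as np

def batch_order_invariant(fn, x):
    """True if reversing the batch dimension of the input only reverses
    the batch dimension of the output, i.e. samples do not leak into
    each other through a mixed-up batch axis."""
    return np.allclose(fn(x)[::-1], fn(x[::-1]))

x = np.arange(12, dtype=float).reshape(4, 3)

# a per-sample map keeps samples independent and passes the check
assert batch_order_invariant(lambda b: b ** 2 + 1.0, x)

# accidentally coupling the computation to batch position fails it,
# which is the class of bug the PR fixes
leaky = lambda b: b * np.arange(len(b))[:, None]
assert not batch_order_invariant(leaky, x)
```

A check like this makes the regression visible without loading the full model: any dimension mix-up that entangles the batch axis with sequence or head axes breaks the invariance immediately.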
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21870/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21870",
"html_url": "https://github.com/huggingface/transformers/pull/21870",
"diff_url": "https://github.com/huggingface/transformers/pull/21870.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21870.patch",
"merged_at": 1677776865000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21869
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21869/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21869/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21869/events
|
https://github.com/huggingface/transformers/pull/21869
| 1,604,651,039
|
PR_kwDOCUB6oc5LA8Iv
| 21,869
|
[GPT-J] add deprecation warning
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Deprecating `position_ids` in GPTJ
Fixes #21114
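The deprecation pattern can be sketched as follows — a hedged illustration of the usual transformers-style warning, not the exact PR diff; only the argument name is taken from the description, and the stub body is hypothetical:

```python
import warnings

def gptj_forward(input_ids, position_ids=None):
    """Hypothetical forward stub: GPT-J computes rotary position
    embeddings internally, so a user-supplied `position_ids` has no
    effect; it only triggers a deprecation warning and is ignored."""
    if position_ids is not None:
        warnings.warn(
            "`position_ids` have no functionality in GPT-J and will be "
            "removed in a future release.",
            FutureWarning,
        )
    return input_ids  # stand-in for the real computation
```

Using `FutureWarning` keeps the argument accepted for now while signaling to downstream callers that it should be dropped before the removal release.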
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21869/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21869",
"html_url": "https://github.com/huggingface/transformers/pull/21869",
"diff_url": "https://github.com/huggingface/transformers/pull/21869.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21869.patch",
"merged_at": 1677765120000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21868
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21868/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21868/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21868/events
|
https://github.com/huggingface/transformers/pull/21868
| 1,604,595,534
|
PR_kwDOCUB6oc5LAwDB
| 21,868
|
[`Blip`] Fix blip doctest
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the Blip doctest that was failing with the changes proposed in https://github.com/huggingface/transformers/pull/21811
Link to failing job: https://github.com/huggingface/transformers/actions/runs/4299412591/jobs/7494589393
## Why this fix is relevant?
In #21811 the logic of the `BlipForConditionalGeneration` forward pass changed. If a user wants to use this as a standalone class and call `forward`, the text input must be fed to the text decoder to mimic the encoder-decoder implementations in `transformers`; see for instance how `forward` is properly called on `T5`: https://github.com/huggingface/transformers/blob/b29e2dcaff114762e65eaea739ba1076fc5d1c84/src/transformers/models/t5/modeling_t5.py#L1641
Hence, the doctest fix is to feed a text input to the decoder by adding a text argument to the processor call.
cc @ydshieh @sgugger 💯
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21868/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21868",
"html_url": "https://github.com/huggingface/transformers/pull/21868",
"diff_url": "https://github.com/huggingface/transformers/pull/21868.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21868.patch",
"merged_at": 1677675954000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21867
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21867/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21867/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21867/events
|
https://github.com/huggingface/transformers/pull/21867
| 1,604,544,970
|
PR_kwDOCUB6oc5LAlFF
| 21,867
|
Flax Regnet
|
{
"login": "Shubhamai",
"id": 51819922,
"node_id": "MDQ6VXNlcjUxODE5OTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/51819922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shubhamai",
"html_url": "https://github.com/Shubhamai",
"followers_url": "https://api.github.com/users/Shubhamai/followers",
"following_url": "https://api.github.com/users/Shubhamai/following{/other_user}",
"gists_url": "https://api.github.com/users/Shubhamai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shubhamai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shubhamai/subscriptions",
"organizations_url": "https://api.github.com/users/Shubhamai/orgs",
"repos_url": "https://api.github.com/users/Shubhamai/repos",
"events_url": "https://api.github.com/users/Shubhamai/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shubhamai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sanchit-gandhi This is now ready for your review, thanks a lot for your time.",
"@sanchit-gandhi All the requested changes have been made and looks ready for next iteration of review, thanks a lot for your time."
] | 1,677
| 1,680
| 1,680
|
CONTRIBUTOR
| null |
# What does this PR do?
Flax Implementation of [facebook/regnet-y-040](https://huggingface.co/facebook/regnet-y-040)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- Flax: sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21867/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21867",
"html_url": "https://github.com/huggingface/transformers/pull/21867",
"diff_url": "https://github.com/huggingface/transformers/pull/21867.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21867.patch",
"merged_at": 1680626473000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21866
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21866/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21866/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21866/events
|
https://github.com/huggingface/transformers/pull/21866
| 1,604,475,531
|
PR_kwDOCUB6oc5LAWT-
| 21,866
|
Fix gradient checkpointing bug Bart
|
{
"login": "saswatmeher",
"id": 35535056,
"node_id": "MDQ6VXNlcjM1NTM1MDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/35535056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saswatmeher",
"html_url": "https://github.com/saswatmeher",
"followers_url": "https://api.github.com/users/saswatmeher/followers",
"following_url": "https://api.github.com/users/saswatmeher/following{/other_user}",
"gists_url": "https://api.github.com/users/saswatmeher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saswatmeher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saswatmeher/subscriptions",
"organizations_url": "https://api.github.com/users/saswatmeher/orgs",
"repos_url": "https://api.github.com/users/saswatmeher/repos",
"events_url": "https://api.github.com/users/saswatmeher/events{/privacy}",
"received_events_url": "https://api.github.com/users/saswatmeher/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot for your great work! \r\nCan you please run `make fix-copies` ? After that we should be good to merge"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes a bug that a user can encounter when calling `generate` on models that use gradient checkpointing.
Fixes issue #21737 for Bart.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. (#21737)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
cc @younesbelkada, @gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
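The usual shape of this class of fix — disabling the past-key-value cache when gradient checkpointing is active during training, instead of letting the two interact badly — can be sketched as below. The helper name is hypothetical; the guard logic mirrors the pattern used across transformers decoders:

```python
import logging

logger = logging.getLogger(__name__)

def resolve_use_cache(gradient_checkpointing: bool, training: bool,
                      use_cache: bool) -> bool:
    """Caching past key/values is incompatible with gradient
    checkpointing (checkpointed segments are re-run in the backward
    pass, invalidating the cache), so `use_cache` is turned off with a
    warning rather than allowed to break `generate`."""
    if gradient_checkpointing and training and use_cache:
        logger.warning(
            "`use_cache=True` is incompatible with gradient "
            "checkpointing. Setting `use_cache=False`..."
        )
        return False
    return use_cache
```

During inference (`training=False`) the cache stays enabled, so generation speed is unaffected; the guard only fires in the training forward pass where the cache would be stale anyway.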
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21866/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21866",
"html_url": "https://github.com/huggingface/transformers/pull/21866",
"diff_url": "https://github.com/huggingface/transformers/pull/21866.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21866.patch",
"merged_at": 1677670918000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21865
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21865/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21865/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21865/events
|
https://github.com/huggingface/transformers/issues/21865
| 1,604,205,888
|
I_kwDOCUB6oc5fnj1A
| 21,865
|
Running summarization with default model fails. 4.27.0.dev0
|
{
"login": "jojivk73",
"id": 14943401,
"node_id": "MDQ6VXNlcjE0OTQzNDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/14943401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jojivk73",
"html_url": "https://github.com/jojivk73",
"followers_url": "https://api.github.com/users/jojivk73/followers",
"following_url": "https://api.github.com/users/jojivk73/following{/other_user}",
"gists_url": "https://api.github.com/users/jojivk73/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jojivk73/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jojivk73/subscriptions",
"organizations_url": "https://api.github.com/users/jojivk73/orgs",
"repos_url": "https://api.github.com/users/jojivk73/repos",
"events_url": "https://api.github.com/users/jojivk73/events{/privacy}",
"received_events_url": "https://api.github.com/users/jojivk73/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Rocketknight1 ",
"Hi @jojivk73, thanks for the bug report! We've reproduced the issue - the cause is that the `transformers` library is currently transitioning to using standardized native Keras layers for as many purposes as possible, and deprecating the previous setup where we often had ad-hoc model-specific solutions. \r\n\r\nOne consequence of the transition is that BART's embeddings used to store their weights in `embeddings.weight`, but now that they've been swapped to a Keras `Embedding` layer, the weights are in `embeddings.embeddings`. We missed this issue in the example code during the transition, but we're preparing a PR to fix it immediately. I'll ping you as soon as it's ready.",
"@jojivk73 the PR is now up at #21881",
"@jojivk73 PR is merged. Please install the latest version from `main` and let me know if you have any other problems, and thanks again for the bug report!"
] | 1,677
| 1,677
| 1,677
|
NONE
| null |
### System Info
When running examples/tensorflow/summarization/run_summarization.py
as given in the README:
python run_summarization.py \
--model_name_or_path facebook/bart-base \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 16 \
--num_train_epochs 3 \
--do_train \
--do_eval
it fails as below.

### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Copy script in README to a file
2. Run the script
### Expected behavior
Should run without issues, as it is the example given in the README.
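For context, the fix described in the maintainer comments for this record (merged as #21881) amounts to reading the embedding weights from their new attribute location: Keras `Embedding` layers expose `.embeddings`, while the older ad-hoc layers used `.weight`. A minimal, hypothetical sketch of such a lookup — the helper name and fallback order are illustrative, not the actual example-script code:

```python
def get_embedding_weights(embedding_layer):
    # Keras-based embedding layers store weights under `.embeddings`;
    # the older custom layers used `.weight`. Try the new location first.
    for attr in ("embeddings", "weight"):
        if hasattr(embedding_layer, attr):
            return getattr(embedding_layer, attr)
    raise AttributeError("embedding layer exposes no weight attribute")
```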
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21865/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21864
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21864/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21864/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21864/events
|
https://github.com/huggingface/transformers/issues/21864
| 1,604,131,492
|
I_kwDOCUB6oc5fnRqk
| 21,864
|
This line prevents us from using "std" scaling any more.
|
{
"login": "zhentao-xu",
"id": 126112554,
"node_id": "U_kgDOB4RTKg",
"avatar_url": "https://avatars.githubusercontent.com/u/126112554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhentao-xu",
"html_url": "https://github.com/zhentao-xu",
"followers_url": "https://api.github.com/users/zhentao-xu/followers",
"following_url": "https://api.github.com/users/zhentao-xu/following{/other_user}",
"gists_url": "https://api.github.com/users/zhentao-xu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhentao-xu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhentao-xu/subscriptions",
"organizations_url": "https://api.github.com/users/zhentao-xu/orgs",
"repos_url": "https://api.github.com/users/zhentao-xu/repos",
"events_url": "https://api.github.com/users/zhentao-xu/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhentao-xu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nStd scaling wasn't supported until #21020 was merged (only mean scaling is currently supported on the latest PyPi install). So if you install Transformers from source, you can use std scaling.\r\n\r\nCc @kashif ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@zhentaoxuttup were you able to use \"std\" scaling?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,683
| 1,683
|
NONE
| null |
https://github.com/huggingface/transformers/blob/b29e2dcaff114762e65eaea739ba1076fc5d1c84/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py#L1549
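For reference, the two scaler flavors discussed in this thread ("mean" vs. "std" scaling) differ only in the statistics used to normalize the context window. A rough, self-contained sketch of the idea — the library's actual implementations may differ in details such as epsilon handling and observed-value masking:

```python
def mean_scale(values, eps=1e-10):
    # "mean" scaling: divide by the mean absolute value of the context window
    scale = sum(abs(v) for v in values) / len(values) + eps
    return [v / scale for v in values], scale

def std_scale(values, eps=1e-10):
    # "std" scaling: subtract the mean, divide by the standard deviation
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5 + eps
    return [(v - mean) / std for v in values], (mean, std)
```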
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21864/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21863
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21863/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21863/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21863/events
|
https://github.com/huggingface/transformers/issues/21863
| 1,604,109,370
|
I_kwDOCUB6oc5fnMQ6
| 21,863
|
Initialization of pytorch_util's Conv1D takes long time regardless of init_empty_weights when loading pretrained gpt2
|
{
"login": "twaka",
"id": 8081197,
"node_id": "MDQ6VXNlcjgwODExOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8081197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/twaka",
"html_url": "https://github.com/twaka",
"followers_url": "https://api.github.com/users/twaka/followers",
"following_url": "https://api.github.com/users/twaka/following{/other_user}",
"gists_url": "https://api.github.com/users/twaka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/twaka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/twaka/subscriptions",
"organizations_url": "https://api.github.com/users/twaka/orgs",
"repos_url": "https://api.github.com/users/twaka/repos",
"events_url": "https://api.github.com/users/twaka/events{/privacy}",
"received_events_url": "https://api.github.com/users/twaka/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"It would be easier to move the initialization after initializing the parameter (so doing `self.weight = nn.Parameter(torch.empty(nx, nf))` and then apply the init normal. Would you like to make a PR with this change?\r\n\r\nEven better, the initialization should be completely left to the `_init_weights` method of the PreTrainedModel using Conv1D and not present in this class at all, but it is a bit more work.",
"Thank you for your suggestion for reordering of initialization.\r\nIt makes sense to me. I'll make a PR soon."
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.26.1
- `accelerate` version: 0.16.0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
gpt2-xl.py
```py
from transformers import AutoModelForCausalLM
import torch
model = AutoModelForCausalLM.from_pretrained(
"gpt2-xl",
torch_dtype=torch.half,
low_cpu_mem_usage=True)
```
```
$ python -m cProfile -s tottime gpt2-xl.py | head
1264809 function calls (1209396 primitive calls) in 24.274 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
194 19.016 0.098 19.016 0.098 {method 'normal_' of 'torch._C._TensorBase' objects}
1256 2.083 0.002 2.083 0.002 {method '_set_from_file' of 'torch._C.StorageBase' objects}
2 0.684 0.342 0.684 0.342 {method 'do_handshake' of '_ssl._SSLSocket' objects}
2 0.355 0.178 0.355 0.178 {method 'read' of '_ssl._SSLSocket' objects}
```
https://github.com/huggingface/transformers/blob/b29e2dcaff114762e65eaea739ba1076fc5d1c84/src/transformers/pytorch_utils.py#L105-L110
`w` is constructed on `device: cpu`, so `normal_` is actually computed.
This is problematic when loading pretrained GPT-2 models with a larger number of parameters.
### Expected behavior
https://github.com/huggingface/transformers/blob/b29e2dcaff114762e65eaea739ba1076fc5d1c84/src/transformers/modeling_utils.py#L2491-L2492
- by changing this line to `init_contexts.append(init_empty_weights(include_buffers=True))`
- `w` will be constructed on `device: meta` according to https://github.com/huggingface/accelerate/pull/699
- as a result, the actual computation of `normal_` will be skipped, giving a faster model loading time
```
$ python -m cProfile -s time gpt2-xl.py | head
1265651 function calls (1210238 primitive calls) in 4.692 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1256 1.766 0.001 1.766 0.001 {method '_set_from_file' of 'torch._C.StorageBase' objects}
2 0.684 0.342 0.684 0.342 {method 'do_handshake' of '_ssl._SSLSocket' objects}
2 0.357 0.178 0.357 0.178 {method 'read' of '_ssl._SSLSocket' objects}
2 0.342 0.171 0.342 0.171 {method 'connect' of '_socket.socket' objects}
```
Though, I don't know whether it's safe to set `include_buffers=True` for all models.
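The gist of this optimization can be illustrated without PyTorch: tensors created on a "meta" device carry only shape metadata, so in-place initializers like `normal_` become no-ops and their cost disappears. A toy sketch of that behavior (class and attribute names chosen for illustration; the real mechanism lives in `accelerate`'s `init_empty_weights`):

```python
import random

class ToyTensor:
    """Minimal stand-in for a tensor that may live on a 'meta' device."""

    def __init__(self, n, device="cpu"):
        self.device = device
        # Meta tensors allocate no storage -- only metadata.
        self.data = None if device == "meta" else [0.0] * n

    def normal_(self, mean=0.0, std=0.02):
        if self.device == "meta":
            return self  # skipped entirely: this is why loading gets faster
        self.data = [random.gauss(mean, std) for _ in self.data]
        return self
```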
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21863/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21862
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21862/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21862/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21862/events
|
https://github.com/huggingface/transformers/issues/21862
| 1,604,108,093
|
I_kwDOCUB6oc5fnL89
| 21,862
|
Very slow process when `torch_dtype` is passed.
|
{
"login": "realSAH",
"id": 98207838,
"node_id": "U_kgDOBdqIXg",
"avatar_url": "https://avatars.githubusercontent.com/u/98207838?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/realSAH",
"html_url": "https://github.com/realSAH",
"followers_url": "https://api.github.com/users/realSAH/followers",
"following_url": "https://api.github.com/users/realSAH/following{/other_user}",
"gists_url": "https://api.github.com/users/realSAH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/realSAH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/realSAH/subscriptions",
"organizations_url": "https://api.github.com/users/realSAH/orgs",
"repos_url": "https://api.github.com/users/realSAH/repos",
"events_url": "https://api.github.com/users/realSAH/events{/privacy}",
"received_events_url": "https://api.github.com/users/realSAH/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"There is nothing we can do without a clear reproducer.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,680
| 1,680
|
NONE
| null |
When I use `from_pretrained`, the model loads from my Azure cache at almost 1 Gbps, but when I specify `torch_dtype`, the process slows to a crawl at about one tenth of the original speed. Looking at resource usage, it appears to be a single-core process that is the bottleneck.
Can this be parallelized?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21862/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21861
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21861/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21861/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21861/events
|
https://github.com/huggingface/transformers/pull/21861
| 1,604,072,550
|
PR_kwDOCUB6oc5K--wZ
| 21,861
|
Make ZeroShotImageClassificationPipeline faster
|
{
"login": "yessenzhar",
"id": 8552242,
"node_id": "MDQ6VXNlcjg1NTIyNDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8552242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yessenzhar",
"html_url": "https://github.com/yessenzhar",
"followers_url": "https://api.github.com/users/yessenzhar/followers",
"following_url": "https://api.github.com/users/yessenzhar/following{/other_user}",
"gists_url": "https://api.github.com/users/yessenzhar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yessenzhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yessenzhar/subscriptions",
"organizations_url": "https://api.github.com/users/yessenzhar/orgs",
"repos_url": "https://api.github.com/users/yessenzhar/repos",
"events_url": "https://api.github.com/users/yessenzhar/events{/privacy}",
"received_events_url": "https://api.github.com/users/yessenzhar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Have you found `batch_size` argument which should take care of that ?\r\n\r\nIt's even better since with `batch_size` you can adjust the size of the batch independantly of the number of candidate labels, which makes it easier to adapt relative to hardware/model size.\r\n\r\nAnd you can have batch_size=1000 with only 3 candidate labels, they really hare independant.",
"We tested batch_size argument, it doesn't work as expected and take long time and a lot of memory. \r\nIn the pipeline `batch_size` separates candidate labels and runs one preprocess for each image/candidate_label pair. \r\nWe expect batching happening for images and all candidate_labels for each image.\r\n```\r\npipe = transformers.pipeline(\r\n task='zero-shot-image-classification',\r\n model='openai/clip-vit-large-patch14-336',\r\n framework='pt',\r\n device=\"cuda:0\"\r\n)\r\n\r\nwith open('labels.json', 'r') as f:\r\n l = f.read()\r\nlabels = json.loads(l)\r\n\r\nres = pipe(images=['/home/user/cat_dog.jpg'], candidate_labels=labels[:250], batch_size=250)\r\n```\r\nIt produces this:\r\n```\r\nOutOfMemoryError: CUDA out of memory. Tried to allocate 4.96 GiB (GPU 0; 10.76 GiB total capacity; 4.66 \r\nGiB already allocated; 4.98 GiB free; 4.71 GiB reserved in total by PyTorch) If reserved memory is >> \r\nallocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory \r\nManagement and PYTORCH_CUDA_ALLOC_CONF\r\n```\r\n\r\nRunning with this using main branch transformers took 56 seconds vs 2 seconds on fast-zero-shot-image.\r\n```\r\nres = pipe(images=['/home/user/cat_dog.jpg'], candidate_labels=labels[:1000], batch_size=100)\r\n```\r\n\r\n\r\n\r\n\r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"Ohh now I remember. \r\n\r\nI dove a bit deeper into the code.\r\n\r\nThe issue lies within `CLIP` itself and how it's working. There's essentially 2 batch sizes the image batch size, and text batch size.\r\nAnd CLIP is returning an object being TEXT_BS * IMAGE_BS. Which means in this case we're doing a cross product of what we really need.\r\n\r\nIn addition to that, the current pipeline does batch the same image over and over (meaning it's going to pass several times in the vision encoder.\r\n\r\nWhat we want in an ideal world, would be that the images get batched on their own, and get their representation encoded, and independently so do the `candidate_labels` (since we're also calculating them way too many times currently, once for each image in the pipeline.).\r\n\r\nWe **need** to keep `batch_size` functioning, which this PR currently silently breaks.\r\n\r\nNow since this pipeline is only implemented for CLIP as of now, I think we can clean this up by breaking up the CLIP model into pieces. I'll try to figure out another solution.",
"I have created another PR with you as co-author to try and find a fix which could keep the performance you get here (potentially a bit better since I calculate candidate labels only once).\r\n\r\nWould the other approach work for you ?\r\n\r\nhttps://github.com/huggingface/transformers/pull/21897 ",
"Closing in favor of #21897 "
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
The pipeline makes separate calls to the model for each candidate label. This commit combines all labels into one call.
The original code takes more than 60 seconds to process one image with 1,000 candidate labels; the updated code takes less than 2 seconds.
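The speed-up comes from scoring all candidate labels against an image in a single pass rather than issuing one model call per label. A schematic sketch of that idea in pure Python (not the actual pipeline code, which runs the embeddings through CLIP):

```python
import math

def score_all_labels(image_emb, label_embs):
    # One batched pass: dot-product each label embedding with the image
    # embedding, then softmax over the candidate labels.
    logits = [sum(i * l for i, l in zip(image_emb, emb)) for emb in label_embs]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```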
## Who can review?
Library:
- pipelines: @Narsil
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21861/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21861",
"html_url": "https://github.com/huggingface/transformers/pull/21861",
"diff_url": "https://github.com/huggingface/transformers/pull/21861.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21861.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21860
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21860/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21860/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21860/events
|
https://github.com/huggingface/transformers/pull/21860
| 1,603,994,673
|
PR_kwDOCUB6oc5K-uWX
| 21,860
|
Change the way tensor is reshaped in BartAttention (from .view to .reshape)
|
{
"login": "raghavanone",
"id": 115454562,
"node_id": "U_kgDOBuGyYg",
"avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raghavanone",
"html_url": "https://github.com/raghavanone",
"followers_url": "https://api.github.com/users/raghavanone/followers",
"following_url": "https://api.github.com/users/raghavanone/following{/other_user}",
"gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions",
"organizations_url": "https://api.github.com/users/raghavanone/orgs",
"repos_url": "https://api.github.com/users/raghavanone/repos",
"events_url": "https://api.github.com/users/raghavanone/events{/privacy}",
"received_events_url": "https://api.github.com/users/raghavanone/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@younesbelkada For some reason fix-copies is not fixing the prophetnet copy, Not sure how to fix this.",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @raghavanone !\r\nHum very weird, can you try to merge this branch with the `main` branch of `transformers` and see if this fixes the issue?",
"> Hi @raghavanone ! Hum very weird, can you try to merge this branch with the `main` branch of `transformers` and see if this fixes the issue?\r\n\r\nIndeed weird, It is already on top of the latest main. Stranger thing is both of these checks pass on my machine.",
"Can you try `pip install --upgrade -e .[\"quality\"]` + `make fixup` + `make fix-copies` ?"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #21813
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21860/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21860",
"html_url": "https://github.com/huggingface/transformers/pull/21860",
"diff_url": "https://github.com/huggingface/transformers/pull/21860.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21860.patch",
"merged_at": 1677674838000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21859
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21859/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21859/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21859/events
|
https://github.com/huggingface/transformers/pull/21859
| 1,603,896,475
|
PR_kwDOCUB6oc5K-ZL1
| 21,859
|
[doc] deepspeed tests
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
Added instructions on how to run the DeepSpeed tests, for DeepSpeed PR contributors.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21859/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21859",
"html_url": "https://github.com/huggingface/transformers/pull/21859",
"diff_url": "https://github.com/huggingface/transformers/pull/21859.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21859.patch",
"merged_at": 1677689570000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21858
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21858/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21858/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21858/events
|
https://github.com/huggingface/transformers/issues/21858
| 1,603,876,368
|
I_kwDOCUB6oc5fmTYQ
| 21,858
|
cannot import name 'COMMON_SAFE_ASCII_CHARACTERS'
|
{
"login": "dickreuter",
"id": 1256318,
"node_id": "MDQ6VXNlcjEyNTYzMTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1256318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dickreuter",
"html_url": "https://github.com/dickreuter",
"followers_url": "https://api.github.com/users/dickreuter/followers",
"following_url": "https://api.github.com/users/dickreuter/following{/other_user}",
"gists_url": "https://api.github.com/users/dickreuter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dickreuter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dickreuter/subscriptions",
"organizations_url": "https://api.github.com/users/dickreuter/orgs",
"repos_url": "https://api.github.com/users/dickreuter/repos",
"events_url": "https://api.github.com/users/dickreuter/events{/privacy}",
"received_events_url": "https://api.github.com/users/dickreuter/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please run `transformers-cli env` and paste the results here, as requested in the issue template. In particular, do you have the tokenizers module installed and which version?",
"### System Info\r\nmacbook air m2 with anaconda, python 3.9\r\n\r\nI got a similar bug :bug:\r\n`ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant'`\r\n\r\nWhen I encountered this I used:\r\n```\r\npip install chardet\r\n```",
"> ### System Info\r\n> macbook m2 with anaconda, python 3.9\r\n> \r\n> I got a similar bug 🐛 `ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant'`\r\n> \r\n> When I encountered this I used:\r\n> \r\n> ```\r\n> pip install chardet\r\n> ```\r\n\r\nEncountered the same error message when importing transformers. Installing chardet solved the issue.\r\n\r\nOutput for transformers-cli env\r\n\r\n- `transformers` version: 4.28.0.dev0\r\n- Platform: Linux-4.18.0-425.13.1.el8_7.x86_64-x86_64-with-glibc2.28\r\n- Python version: 3.9.16\r\n- Huggingface_hub version: 0.13.2\r\n- PyTorch version (GPU?): 1.13.1 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: no\r\n- Using distributed or parallel set-up in script?: no",
"Could you please provide us with the full traceback? To potentially fix this, we need to know which module raises the error and neither Transformers nor Tokenizers import anything from charset directly.",
"```code\r\nTraceback (most recent call last):\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/compat.py\", line 11, in <module>\r\n import chardet\r\nModuleNotFoundError: No module named 'chardet'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/__init__.py\", line 26, in <module>\r\n from . import dependency_versions_check\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/dependency_versions_check.py\", line 17, in <module>\r\n from .utils.versions import require_version, require_version_core\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/utils/__init__.py\", line 30, in <module>\r\n from .generic import (\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/utils/generic.py\", line 29, in <module>\r\n from .import_utils import is_flax_available, is_tf_available, is_torch_available, is_torch_fx_proxy\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/utils/import_utils.py\", line 32, in <module>\r\n from . 
import logging\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/utils/logging.py\", line 35, in <module>\r\n import huggingface_hub.utils as hf_hub_utils\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/huggingface_hub/utils/__init__.py\", line 32, in <module>\r\n from ._errors import (\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py\", line 3, in <module>\r\n from requests import HTTPError, Response\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/__init__.py\", line 45, in <module>\r\n from .exceptions import RequestsDependencyWarning\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/exceptions.py\", line 9, in <module>\r\n from .compat import JSONDecodeError as CompatJSONDecodeError\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/compat.py\", line 13, in <module>\r\n import charset_normalizer as chardet\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/__init__.py\", line 23, in <module>\r\n from charset_normalizer.api import from_fp, from_path, from_bytes, normalize\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/api.py\", line 10, in <module>\r\n from charset_normalizer.md import mess_ratio\r\n File \"charset_normalizer/md.py\", line 5, in <module>\r\nImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant' (/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/constant.py)\r\n```",
"Ok so this looks like it stems from the `requests` so it may be worth raising the issue there (it looks like `from requests import HTTPError, Response` fails in your env, if you want a minimal reproducer).\r\n\r\n@Wauplin We might also need to put something in the dependencies of `huggingface_hub` to have the `chardet` dep installed on MacOS?",
"@sgugger I'm not against adding the dependency but as you said, it really seems to be an issue on `requests` side that has nothing to do with `huggingface_hub`/`transformers` (except the fact we use `requests`). I would first try to:\r\n\r\n1. isolate a minimal reproducible code. Maybe just 1 line is enough:\r\n```py\r\nfrom requests import HTTPError\r\n# or\r\nfrom requests import Response\r\n```\r\n\r\n2. list all installed deps (in particalar, `requests`, `charset` and `charset-normalizer`) + python version + os\r\n3. open an issue on https://github.com/psf/requests\r\n4. once that's done, open an issue in [huggingface_hub](https://github.com/huggingface/huggingface_hub) and decide what's the best solution (add chardet as deps for macos for example?)",
"Yes, we can try to have it solve in requests first indeed. It's if that takes too much time or is not deemed important we should fix it in hf hub.\r\n\r\n@ani0075saha Could you try the two lines given by Wauplin and do step 2 and 3?",
"1.\r\n\r\n```code\r\n>>> from requests import HTTPError\r\nTraceback (most recent call last):\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/compat.py\", line 11, in <module>\r\n import chardet\r\nModuleNotFoundError: No module named 'chardet'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/__init__.py\", line 45, in <module>\r\n from .exceptions import RequestsDependencyWarning\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/exceptions.py\", line 9, in <module>\r\n from .compat import JSONDecodeError as CompatJSONDecodeError\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/compat.py\", line 13, in <module>\r\n import charset_normalizer as chardet\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/__init__.py\", line 23, in <module>\r\n from charset_normalizer.api import from_fp, from_path, from_bytes, normalize\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/api.py\", line 10, in <module>\r\n from charset_normalizer.md import mess_ratio\r\n File \"charset_normalizer/md.py\", line 5, in <module>\r\nImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant' (/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/constant.py)\r\n```\r\n```code\r\n>>> from requests import Response\r\nTraceback (most recent call last):\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/compat.py\", line 11, in <module>\r\n import chardet\r\nModuleNotFoundError: No module named 'chardet'\r\n\r\nDuring 
handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/__init__.py\", line 45, in <module>\r\n from .exceptions import RequestsDependencyWarning\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/exceptions.py\", line 9, in <module>\r\n from .compat import JSONDecodeError as CompatJSONDecodeError\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/compat.py\", line 13, in <module>\r\n import charset_normalizer as chardet\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/__init__.py\", line 23, in <module>\r\n from charset_normalizer.api import from_fp, from_path, from_bytes, normalize\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/api.py\", line 10, in <module>\r\n from charset_normalizer.md import mess_ratio\r\nAttributeError: partially initialized module 'charset_normalizer' has no attribute 'md__mypyc' (most likely due to a circular import)\r\n```\r\n2.\r\n\r\n```code\r\n(huggingface-bug-test) anisaha1:~$ conda list \r\n# packages in environment at /<redacted>/anaconda3/envs/huggingface-bug-test:\r\n#\r\n# Name Version Build Channel\r\n_libgcc_mutex 0.1 main \r\n_openmp_mutex 5.1 1_gnu \r\nblas 1.0 mkl \r\nbrotlipy 0.7.0 py39h27cfd23_1003 \r\nbzip2 1.0.8 h7b6447c_0 \r\nca-certificates 2023.01.10 h06a4308_0 \r\ncertifi 2022.12.7 py39h06a4308_0 \r\ncffi 1.15.1 py39h5eee18b_3 \r\ncharset-normalizer 3.1.0 pypi_0 pypi\r\ncryptography 39.0.1 py39h9ce1e76_0 \r\ncuda-cudart 11.7.99 0 nvidia\r\ncuda-cupti 11.7.101 0 nvidia\r\ncuda-libraries 11.7.1 0 nvidia\r\ncuda-nvrtc 11.7.99 0 nvidia\r\ncuda-nvtx 11.7.91 0 nvidia\r\ncuda-runtime 11.7.1 0 nvidia\r\nffmpeg 4.3 hf484d3e_0 
pytorch\r\nfilelock 3.10.0 pypi_0 pypi\r\nflit-core 3.6.0 pyhd3eb1b0_0 \r\nfreetype 2.12.1 h4a9f257_0 \r\ngiflib 5.2.1 h5eee18b_3 \r\ngmp 6.2.1 h295c915_3 \r\ngnutls 3.6.15 he1e5248_0 \r\nhuggingface-hub 0.13.3 pypi_0 pypi\r\nidna 3.4 py39h06a4308_0 \r\nintel-openmp 2021.4.0 h06a4308_3561 \r\njpeg 9e h5eee18b_1 \r\nlame 3.100 h7b6447c_0 \r\nlcms2 2.12 h3be6417_0 \r\nld_impl_linux-64 2.38 h1181459_1 \r\nlerc 3.0 h295c915_0 \r\nlibcublas 11.10.3.66 0 nvidia\r\nlibcufft 10.7.2.124 h4fbf590_0 nvidia\r\nlibcufile 1.6.0.25 0 nvidia\r\nlibcurand 10.3.2.56 0 nvidia\r\nlibcusolver 11.4.0.1 0 nvidia\r\nlibcusparse 11.7.4.91 0 nvidia\r\nlibdeflate 1.17 h5eee18b_0 \r\nlibffi 3.4.2 h6a678d5_6 \r\nlibgcc-ng 11.2.0 h1234567_1 \r\nlibgomp 11.2.0 h1234567_1 \r\nlibiconv 1.16 h7f8727e_2 \r\nlibidn2 2.3.2 h7f8727e_0 \r\nlibnpp 11.7.4.75 0 nvidia\r\nlibnvjpeg 11.8.0.2 0 nvidia\r\nlibpng 1.6.39 h5eee18b_0 \r\nlibstdcxx-ng 11.2.0 h1234567_1 \r\nlibtasn1 4.16.0 h27cfd23_0 \r\nlibtiff 4.5.0 h6a678d5_2 \r\nlibunistring 0.9.10 h27cfd23_0 \r\nlibwebp 1.2.4 h11a3e52_1 \r\nlibwebp-base 1.2.4 h5eee18b_1 \r\nlz4-c 1.9.4 h6a678d5_0 \r\nmkl 2021.4.0 h06a4308_640 \r\nmkl-service 2.4.0 py39h7f8727e_0 \r\nmkl_fft 1.3.1 py39hd3c417c_0 \r\nmkl_random 1.2.2 py39h51133e4_0 \r\nncurses 6.4 h6a678d5_0 \r\nnettle 3.7.3 hbbd107a_1 \r\nnumpy 1.24.2 pypi_0 pypi\r\nnumpy-base 1.23.5 py39h31eccc5_0 \r\nopenh264 2.1.1 h4ff587b_0 \r\nopenssl 1.1.1t h7f8727e_0 \r\npackaging 23.0 pypi_0 pypi\r\npillow 9.4.0 py39h6a678d5_0 \r\npip 23.0.1 py39h06a4308_0 \r\npycparser 2.21 pyhd3eb1b0_0 \r\npyopenssl 23.0.0 py39h06a4308_0 \r\npysocks 1.7.1 py39h06a4308_0 \r\npython 3.9.16 h7a1cb2a_2 \r\npytorch 1.13.1 py3.9_cuda11.7_cudnn8.5.0_0 pytorch\r\npytorch-cuda 11.7 h778d358_3 pytorch\r\npytorch-mutex 1.0 cuda pytorch\r\npyyaml 6.0 pypi_0 pypi\r\nreadline 8.2 h5eee18b_0 \r\nregex 2022.10.31 pypi_0 pypi\r\nrequests 2.28.2 pypi_0 pypi\r\nsetuptools 65.6.3 py39h06a4308_0 \r\nsix 1.16.0 pyhd3eb1b0_1 \r\nsqlite 3.41.1 h5eee18b_0 
\r\ntk 8.6.12 h1ccaba5_0 \r\ntokenizers 0.13.2 pypi_0 pypi\r\ntorchaudio 0.13.1 py39_cu117 pytorch\r\ntorchvision 0.14.1 py39_cu117 pytorch\r\ntqdm 4.65.0 pypi_0 pypi\r\ntransformers 4.28.0.dev0 pypi_0 pypi\r\ntyping-extensions 4.5.0 pypi_0 pypi\r\ntyping_extensions 4.4.0 py39h06a4308_0 \r\ntzdata 2022g h04d1e81_0 \r\nurllib3 1.26.15 pypi_0 pypi\r\nwheel 0.38.4 py39h06a4308_0 \r\nxz 5.2.10 h5eee18b_1 \r\nzlib 1.2.13 h5eee18b_0 \r\nzstd 1.5.2 ha4553b6_0 \r\n```\r\n\r\n```code\r\n(huggingface-bug-test) anisaha1:~$ transformers-cli env\r\nTraceback (most recent call last):\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/compat.py\", line 11, in <module>\r\n import chardet\r\nModuleNotFoundError: No module named 'chardet'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/bin/transformers-cli\", line 5, in <module>\r\n from transformers.commands.transformers_cli import main\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/__init__.py\", line 26, in <module>\r\n from . 
import dependency_versions_check\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/dependency_versions_check.py\", line 17, in <module>\r\n from .utils.versions import require_version, require_version_core\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/utils/__init__.py\", line 30, in <module>\r\n from .generic import (\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/utils/generic.py\", line 29, in <module>\r\n from .import_utils import is_flax_available, is_tf_available, is_torch_available, is_torch_fx_proxy\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/utils/import_utils.py\", line 32, in <module>\r\n from . import logging\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/utils/logging.py\", line 35, in <module>\r\n import huggingface_hub.utils as hf_hub_utils\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/huggingface_hub/utils/__init__.py\", line 32, in <module>\r\n from ._errors import (\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py\", line 3, in <module>\r\n from requests import HTTPError, Response\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/__init__.py\", line 45, in <module>\r\n from .exceptions import RequestsDependencyWarning\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/exceptions.py\", line 9, in <module>\r\n from .compat import JSONDecodeError as CompatJSONDecodeError\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/compat.py\", line 13, in <module>\r\n import charset_normalizer as chardet\r\n File 
\"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/__init__.py\", line 23, in <module>\r\n from charset_normalizer.api import from_fp, from_path, from_bytes, normalize\r\n File \"/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/api.py\", line 10, in <module>\r\n from charset_normalizer.md import mess_ratio\r\n File \"charset_normalizer/md.py\", line 5, in <module>\r\nImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant' (/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/constant.py)\r\n```\r\n\r\n```code\r\n(huggingface-bug-test) anisaha1:~$ cat /etc/os-release\r\nNAME=\"Red Hat Enterprise Linux\"\r\nVERSION=\"8.7 (Ootpa)\"\r\nID=\"rhel\"\r\nID_LIKE=\"fedora\"\r\nVERSION_ID=\"8.7\"\r\nPLATFORM_ID=\"platform:el8\"\r\nPRETTY_NAME=\"Red Hat Enterprise Linux 8.7 (Ootpa)\"\r\nANSI_COLOR=\"0;31\"\r\nCPE_NAME=\"cpe:/o:redhat:enterprise_linux:8::baseos\"\r\nHOME_URL=\"https://www.redhat.com/\"\r\nDOCUMENTATION_URL=\"https://access.redhat.com/documentation/red_hat_enterprise_linux/8/\"\r\nBUG_REPORT_URL=\"https://bugzilla.redhat.com/\"\r\n\r\nREDHAT_BUGZILLA_PRODUCT=\"Red Hat Enterprise Linux 8\"\r\nREDHAT_BUGZILLA_PRODUCT_VERSION=8.7\r\nREDHAT_SUPPORT_PRODUCT=\"Red Hat Enterprise Linux\"\r\nREDHAT_SUPPORT_PRODUCT_VERSION=\"8.7\"\r\n\r\n(base) anisaha1:~$ uname -r\r\n4.18.0-425.13.1.el8_7.x86_64\r\n```\r\n\r\n3. https://github.com/psf/requests/issues/6384",
"`python -m pip install charset-normalizer==2.1.0`\r\n\r\nsolves the issue",
"> Yes, we can try to have it solve in requests first indeed. It's if that takes too much time or is not deemed important we should fix it in hf hub.\r\n> \r\n> @ani0075saha Could you try the two lines given by Wauplin and do step 2 and 3?\r\n\r\nHi @sgugger @Wauplin, the issue I made in requests library was closed. Any thoughts on next steps?",
"You can try opening an issue at [charset_normalizer](https://github.com/Ousret/charset_normalizer) and point out that their 3.1.0 release seems broken on MacOS (but 2.1.0 works apparently, from the comment above).\r\n\r\nFrom your traceback, the simple line `import charset_normalizer` should fail in your environment (it doesn't in mine, but I'm not on MacOS).",
"I got the above error and did `python -m pip install charset-normalizer==2.1.0`. This gave me another error which went away after doing `pip install chardet `. \r\n\r\nThe error after 2.1.0 was as below but it was solved. I'm using M2 MAX and the packages below.\r\n\r\n`ImportError: cannot import name 'KO_NAMES' from 'charset_normalizer.constant' (/opt/anaconda3/envs/mlenv/lib/python3.8/site-packages/charset_normalizer/constant.py)`\r\n\r\nPackage Version\r\n------------------------ ----------\r\nanyio 3.6.2\r\nappnope 0.1.2\r\nargon2-cffi 21.3.0\r\nargon2-cffi-bindings 21.2.0\r\narrow 1.2.3\r\nasttokens 2.0.5\r\nattrs 22.2.0\r\nbackcall 0.2.0\r\nbeautifulsoup4 4.11.2\r\nbleach 6.0.0\r\nbrotlipy 0.7.0\r\ncertifi 2022.12.7\r\ncffi 1.15.1\r\nchardet 5.1.0\r\ncharset-normalizer 2.1.0\r\nclick 8.1.3\r\ncomm 0.1.2\r\ncontourpy 1.0.7\r\ncryptography 39.0.1\r\ncycler 0.11.0\r\ndebugpy 1.6.6\r\ndecorator 5.1.1\r\ndefusedxml 0.7.1\r\nexecuting 0.8.3\r\nfastjsonschema 2.16.2\r\nfilelock 3.9.0\r\nflit_core 3.6.0\r\nfonttools 4.39.0\r\nfqdn 1.5.1\r\nfuture 0.18.2\r\ngmpy2 2.1.2\r\nhuggingface-hub 0.12.1\r\nidna 3.4\r\nimportlib-metadata 6.0.0\r\nimportlib-resources 5.12.0\r\nipykernel 6.21.2\r\nipython 8.10.0\r\nipython-genutils 0.2.0\r\nipywidgets 8.0.4\r\nisoduration 20.11.0\r\njedi 0.18.1\r\nJinja2 3.1.2\r\njoblib 1.2.0\r\njsonpointer 2.3\r\njsonschema 4.17.3\r\njupyter 1.0.0\r\njupyter_client 8.0.3\r\njupyter-console 6.6.1\r\njupyter_core 5.2.0\r\njupyter-events 0.6.3\r\njupyter_server 2.3.0\r\njupyter_server_terminals 0.4.4\r\njupyterlab-pygments 0.2.2\r\njupyterlab-widgets 3.0.5\r\nkiwisolver 1.4.4\r\nMarkupSafe 2.1.2\r\nmatplotlib 3.7.1\r\nmatplotlib-inline 0.1.6\r\nmistune 2.0.5\r\nmkl-fft 1.3.1\r\nmkl-random 1.2.2\r\nmkl-service 2.4.0\r\nmpmath 1.3.0\r\nnbclassic 0.5.2\r\nnbclient 0.7.2\r\nnbconvert 7.2.9\r\nnbformat 5.7.3\r\nnest-asyncio 1.5.6\r\nnetworkx 3.0\r\nnltk 3.8.1\r\nnotebook 6.5.2\r\nnotebook_shim 0.2.2\r\nnumpy 1.23.5\r\npackaging 23.0\r\npandas 
1.5.3\r\npandocfilters 1.5.0\r\nparso 0.8.3\r\npexpect 4.8.0\r\npickleshare 0.7.5\r\nPillow 9.4.0\r\npip 22.3.1\r\npkgutil_resolve_name 1.3.10\r\nplatformdirs 3.0.0\r\nportalocker 2.7.0\r\nprometheus-client 0.16.0\r\nprompt-toolkit 3.0.36\r\npsutil 5.9.4\r\nptyprocess 0.7.0\r\npure-eval 0.2.2\r\npycparser 2.21\r\nPygments 2.11.2\r\npyOpenSSL 23.0.0\r\npyparsing 3.0.9\r\npyrsistent 0.19.3\r\nPySocks 1.7.1\r\npython-dateutil 2.8.2\r\npython-json-logger 2.0.7\r\npytorch-crf 0.7.2\r\npytz 2023.2\r\nPyYAML 6.0\r\npyzmq 25.0.0\r\nqtconsole 5.4.0\r\nQtPy 2.3.0\r\nregex 2022.10.31\r\nrequests 2.28.2\r\nrfc3339-validator 0.1.4\r\nrfc3986-validator 0.1.1\r\nscikit-learn 1.2.2\r\nscikit-plot 0.3.7\r\nscipy 1.10.1\r\nseaborn 0.12.2\r\nSend2Trash 1.8.0\r\nsentence-transformers 2.2.2\r\nsentencepiece 0.1.97\r\nsetuptools 65.6.3\r\nsix 1.16.0\r\nsklearn 0.0.post1\r\nsniffio 1.3.0\r\nsoupsieve 2.4\r\nstack-data 0.2.0\r\nsympy 1.11.1\r\nterminado 0.17.1\r\nthreadpoolctl 3.1.0\r\ntinycss2 1.2.1\r\ntokenizers 0.12.1\r\ntorch 2.0.0\r\ntorchaudio 2.0.0\r\ntorchdata 0.6.0\r\ntorchtext 0.13.0\r\ntorchvision 0.15.0\r\ntornado 6.2\r\ntqdm 4.64.1\r\ntraitlets 5.7.1\r\ntransformers 4.27.4\r\ntyping_extensions 4.4.0\r\nuri-template 1.2.0\r\nurllib3 1.26.15\r\nwcwidth 0.2.5\r\nwebcolors 1.12\r\nwebencodings 0.5.1\r\nwebsocket-client 1.5.1\r\nwheel 0.38.4\r\nwidgetsnbextension 4.0.5\r\nzipp 3.14.0",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I am currently having this issue with one anaconda environment and not another. This is very confusing.",
"Actually I was getting the same error **when importing OpenAI** but that was solved by **upgrading the package**\r\n\r\n`pip install --upgrade chardet`\r\n\r\nAnd that worked! ",
"> ### System Info\r\n> macbook air m2 with anaconda, python 3.9\r\n> \r\n> I got a similar bug 🐛 `ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant'`\r\n> \r\n> When I encountered this I used:\r\n> \r\n> ```\r\n> pip install chardet\r\n> ```\r\n\r\nIt's working for me. You are the best."
] | 1,677
| 1,700
| 1,683
|
NONE
| null |
### System Info
macbook pro m2 with anaconda, python 3.9
I'm running transformers on an m1 mac and am getting the following error when I import
`from transformers import OwlViTProcessor, OwlViTForObjectDetection`
File ~/opt/anaconda3/envs/nd1/lib/python3.9/site-packages/transformers/__init__.py:26
23 from typing import TYPE_CHECKING
25 # Check the dependencies satisfy the minimal versions required.
---> 26 from . import dependency_versions_check
27 from .utils import (
28 OptionalDependencyNotAvailable,
29 _LazyModule,
(...)
42 logging,
43 )
46 logger = logging.get_logger(__name__) # pylint: disable=invalid-name
File ~/opt/anaconda3/envs/nd1/lib/python3.9/site-packages/transformers/dependency_versions_check.py:36
33 if pkg in deps:
34 if pkg == "tokenizers":
35 # must be loaded here, or else tqdm check may fail
---> 36 from .utils import is_tokenizers_available
...
---> 10 from charset_normalizer.md import mess_ratio
11 from charset_normalizer.models import CharsetMatches, CharsetMatch
12 from warnings import warn
AttributeError: partially initialized module 'charset_normalizer' has no attribute 'md__mypyc' (most likely due to a circular import)
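The full tracebacks elsewhere in this thread show `requests/compat.py` first trying `import chardet` and then falling back to `import charset_normalizer as chardet`; it is that fallback which hits the broken `charset_normalizer` install. A minimal runnable sketch of the same fallback logic (commenters in this thread report that `pip install chardet`, or pinning `charset-normalizer==2.1.0`, avoids the broken path):

```python
# Sketch of the import fallback performed in requests/compat.py, per the
# tracebacks in this thread: chardet is preferred, charset_normalizer is
# aliased as a substitute. When charset_normalizer 3.x is broken in the
# environment, the second import is the one that fails.
try:
    import chardet
except ImportError:
    try:
        import charset_normalizer as chardet
    except ImportError:
        chardet = None

print("detector module in use:", getattr(chardet, "__name__", None))
```

Installing `chardet` directly short-circuits the fallback, which is why it works around the broken `charset_normalizer` release.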
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Install transformers on a macbook m2
from transformers import OwlViTProcessor, OwlViTForObjectDetection
### Expected behavior
It should import, but instead it gives the above error message.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21858/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21857
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21857/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21857/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21857/events
|
https://github.com/huggingface/transformers/pull/21857
| 1,603,774,645
|
PR_kwDOCUB6oc5K9-u7
| 21,857
|
Flax beam search fix
|
{
"login": "andyehrenberg",
"id": 32784181,
"node_id": "MDQ6VXNlcjMyNzg0MTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/32784181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andyehrenberg",
"html_url": "https://github.com/andyehrenberg",
"followers_url": "https://api.github.com/users/andyehrenberg/followers",
"following_url": "https://api.github.com/users/andyehrenberg/following{/other_user}",
"gists_url": "https://api.github.com/users/andyehrenberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andyehrenberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andyehrenberg/subscriptions",
"organizations_url": "https://api.github.com/users/andyehrenberg/orgs",
"repos_url": "https://api.github.com/users/andyehrenberg/repos",
"events_url": "https://api.github.com/users/andyehrenberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/andyehrenberg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Makes it so you can pass `decoder_attention_mask` into `model.generate` for flax models when doing beam search. This is helpful for models like Whisper where there may be variable length decoder prefixes across a batch, so you'd have to define a `decoder_attention_mask`.
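For illustration, here is a small sketch (plain Python, not tied to any particular model) of building a `decoder_attention_mask` for variable-length decoder prefixes in a batch — the kind of input this PR lets you pass to `model.generate` for Flax models:

```python
# Hypothetical decoder prefixes of different lengths (token ids are made up);
# real prefixes would come from a tokenizer, e.g. Whisper's forced decoder ids.
prefixes = [[50258, 50259, 50359], [50258, 50360]]
pad_token_id = 0
max_len = max(len(p) for p in prefixes)

# Right-pad every prefix to the same length and mark real tokens with 1s.
decoder_input_ids = [p + [pad_token_id] * (max_len - len(p)) for p in prefixes]
decoder_attention_mask = [[1] * len(p) + [0] * (max_len - len(p)) for p in prefixes]

print(decoder_attention_mask)  # [[1, 1, 1], [1, 1, 0]]
```

Without the mask, the pad positions in the shorter prefixes would be attended to like real tokens.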
@sanchit-gandhi
@sgugger
@frmccann97
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21857/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21857/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21857",
"html_url": "https://github.com/huggingface/transformers/pull/21857",
"diff_url": "https://github.com/huggingface/transformers/pull/21857.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21857.patch",
"merged_at": 1677666333000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21856
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21856/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21856/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21856/events
|
https://github.com/huggingface/transformers/pull/21856
| 1,603,699,962
|
PR_kwDOCUB6oc5K9u0V
| 21,856
|
Add an utility file to get information from test files
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> It's all looking great! Can you add a couple of unit tests in `tests/repo_utils` for this file? It may help you catch some bugs and make sure any future PRs don't break anything.\r\n\r\nHi! I added a test file under `tests/repo_util/`, but as you know, the newly introduced methods require some libraries to be there, so we can import modules and get the list of model tester/test/classes. Therefore we can't test against some expected values on CircleCI `repo_utils_job`, where even `torch` or `vision` is not there.\r\n\r\nDo you have any suggestion, say, moving this new test file outside `tests/repo_util/`?\r\n\r\n~~Or maybe I should create some dummy model/test/tester classes dynamically inside the test file, and use them for testing?~~"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
**Add a utility file to get information from test files.**
There are more places where we need to access the information contained in test files:
- tiny model creation script: need model tester in order to create (tiny) configuration
- pipeline testing:
- need some information so we can create `pipeline_model_mapping` in a systematic way (to avoid human error and time-consuming manual edits) from the existing `model_mapping` and `tf_model_mapping` under `xxxPipelineTests` classes (which use AUTO mappings)
- need information so we can develop some checks to make sure no pipeline tests are missing
I think it's good if we have a centralized place (file) providing ways to get this information, therefore this PR comes.
This new file will be under development.
## One example usage
### code snippet
```python
import json
import os

# `get_model_to_test_mapping`, `get_model_to_tester_mapping` and `to_json`
# are the helpers added by this PR.
test_file = "tests/models/blip/test_modeling_blip.py"
test_file = f"{os.path.sep}".join(test_file.split("/"))
model_test_mapping = get_model_to_test_mapping(test_file)
model_tester_mapping = get_model_to_tester_mapping(test_file)

print(json.dumps(to_json(model_test_mapping), indent=4))
print(json.dumps(to_json(model_tester_mapping), indent=4))
```
### model to test classes
```python
{
"BlipForConditionalGeneration": [
"BlipTextImageModelTest"
],
"BlipForImageTextRetrieval": [
"BlipTextRetrievalModelTest"
],
"BlipForQuestionAnswering": [
"BlipTextImageModelTest",
"BlipVQAModelTest"
],
"BlipModel": [
"BlipModelTest"
],
"BlipTextModel": [
"BlipTextModelTest"
],
"BlipVisionModel": [
"BlipVisionModelTest"
]
}
```
### model to tester classes
```python
{
"BlipForConditionalGeneration": [
"BlipTextImageModelsModelTester"
],
"BlipForImageTextRetrieval": [
"BlipTextRetrievalModelTester"
],
"BlipForQuestionAnswering": [
"BlipModelTester",
"BlipTextImageModelsModelTester"
],
"BlipModel": [
"BlipModelTester"
],
"BlipTextModel": [
"BlipTextModelTester"
],
"BlipVisionModel": [
"BlipVisionModelTester"
]
}
```
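As a rough illustration only (not this PR's actual implementation, which imports the test modules), mappings like the ones above can also be approximated statically by scanning class names in a test file with `ast`:

```python
import ast

# Toy test-file source standing in for e.g. test_modeling_blip.py.
# The source is only parsed, never executed, so undefined names are fine.
source = """
class BlipModelTester:
    pass

class BlipModelTest:
    all_model_classes = (BlipModel,)
"""

tree = ast.parse(source)
classes = [node.name for node in ast.walk(tree) if isinstance(node, ast.ClassDef)]
testers = [name for name in classes if name.endswith("ModelTester")]
tests = [name for name in classes if name.endswith("ModelTest")]
print(testers, tests)  # ['BlipModelTester'] ['BlipModelTest']
```

Static scanning avoids needing `torch` or `vision` installed, which is the constraint mentioned in the review discussion about testing on CircleCI.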
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21856/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21856",
"html_url": "https://github.com/huggingface/transformers/pull/21856",
"diff_url": "https://github.com/huggingface/transformers/pull/21856.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21856.patch",
"merged_at": 1677689610000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21855
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21855/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21855/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21855/events
|
https://github.com/huggingface/transformers/pull/21855
| 1,603,639,071
|
PR_kwDOCUB6oc5K9hoj
| 21,855
|
Move common properties to BackboneMixin
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,680
| 1,680
|
COLLABORATOR
| null |
# What does this PR do?
First of a series of PRs to enable loading timm checkpoints using the `AutoBackbone` API. This PR moves common logic, e.g. the `channels` property, to the `BackboneMixin` class. This is for two main reasons:
* Reduce duplicated code
* Enable using similar logic across the transformer and timm backbones
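As a generic illustration of the pattern (the names here are invented, not the actual `BackboneMixin` code), moving a shared property into a mixin looks like:

```python
class BackboneMixinSketch:
    """Shared logic that each concrete backbone previously duplicated."""

    @property
    def channels(self):
        # Derived from attributes every concrete backbone defines for itself.
        return [self.hidden_sizes[i] for i in self.out_indices]


class ToyBackbone(BackboneMixinSketch):
    def __init__(self):
        self.hidden_sizes = [96, 192, 384, 768]
        self.out_indices = [1, 3]


print(ToyBackbone().channels)  # [192, 768]
```

Each backbone then only declares its own attributes, and the derived property lives in one place — which is also what allows a timm-based backbone to reuse it later in the series.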
## Series of PRs
- [x] Moving common logic into the `BackboneMixin` class (this PR)
- [ ] Add `out_indices` to backbones - [PR](https://github.com/amyeroberts/transformers/pull/109/files)
Note: This is an optional design choice and not necessary for loading the timm backbones
- [ ] Add tests for backbone models - [PR](https://github.com/amyeroberts/transformers/pull/110/files)
- [ ] Add `TimmBackbone` model that can be loaded through `AutoBackbone` - [PR](https://github.com/amyeroberts/transformers/pull/111/files)
This is where all the important stuff happens 🪄
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21855/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21855",
"html_url": "https://github.com/huggingface/transformers/pull/21855",
"diff_url": "https://github.com/huggingface/transformers/pull/21855.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21855.patch",
"merged_at": 1680167052000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21854
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21854/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21854/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21854/events
|
https://github.com/huggingface/transformers/issues/21854
| 1,603,596,695
|
I_kwDOCUB6oc5flPGX
| 21,854
|
Running squad with GPT-J-6B fails due to issue in tokenizer.
|
{
"login": "jojivk73",
"id": 14943401,
"node_id": "MDQ6VXNlcjE0OTQzNDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/14943401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jojivk73",
"html_url": "https://github.com/jojivk73",
"followers_url": "https://api.github.com/users/jojivk73/followers",
"following_url": "https://api.github.com/users/jojivk73/following{/other_user}",
"gists_url": "https://api.github.com/users/jojivk73/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jojivk73/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jojivk73/subscriptions",
"organizations_url": "https://api.github.com/users/jojivk73/orgs",
"repos_url": "https://api.github.com/users/jojivk73/repos",
"events_url": "https://api.github.com/users/jojivk73/events{/privacy}",
"received_events_url": "https://api.github.com/users/jojivk73/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This example does not support GPT-J out of the box since GPT-J has no CLS token (compared to BERT or XLNet). You will need to adapt the preprocessing as a result.",
"@sgugger Can you please elaborate.\r\nI tired using distillbert tokenizer and it is running. \r\nI am not sure if that is good.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,680
| 1,680
|
NONE
| null |
### System Info
Hi,
I am trying to use the run_qa.py script under examples/tensorflow/question-answering with model EleutherAI/gpt-j-6B and the squad dataset.
It fails as below.

### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install transformer from source
2. python examples/tensorflow/question-answering/run_qa.py with --model_name EleutherAI/gpt-j-6B.
--- \
--dataset_name squad --do_train --do_eval
### Expected behavior
The script to run the finetuning task
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21854/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21853
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21853/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21853/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21853/events
|
https://github.com/huggingface/transformers/pull/21853
| 1,603,502,761
|
PR_kwDOCUB6oc5K9DZv
| 21,853
|
[GPT2] Propose fix for #21080
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I'd like to summon the generation expert @gante to ask what he thinks of the PR",
"The summon has been heard 📣 \r\n\r\n### TL;DR\r\n\r\nI approve the change 👍 But we need to have a second look at the position embeddings, I suspect there are several bugs in the codebase (see why below)\r\n\r\n### Context\r\nThis PR actually took me on a long trip, whose findings I summarize below:\r\n1. I saw that @ArthurZucker wrote `this would only affect 2 models`, and my first thought was `what about GPT-J`?\r\n2. Then I saw #21869 (remove `position_ids` input from GPT-J). From the TF XLA `.generate` transition, I remember that getting the `position_ids` to work with GPT-J was a nice piece of work, and I remember that it made a difference in the outputs. See the example below\r\n\r\n<details>\r\n <summary>GPT-J + positions_ids</summary>\r\n \r\n ```py\r\n from transformers import TFAutoModelForCausalLM, AutoTokenizer\r\n import tensorflow as tf\r\n \r\n tf.keras.backend.set_floatx('float16')\r\n \r\n tok = AutoTokenizer.from_pretrained(\"EleutherAI/gpt-j-6B\", padding_side=\"left\")\r\n model = TFAutoModelForCausalLM.from_pretrained(\"EleutherAI/gpt-j-6B\", revision=\"float16\", from_pt=True)\r\n tok.pad_token = tok.eos_token\r\n model.generation_config.pad_token_id = model.generation_config.eos_token_id\r\n \r\n inputs = tok([\"and the prime minister\"], return_tensors=\"tf\", padding=True)\r\n out_1 = model(**inputs)\r\n out_2 = model(**inputs)\r\n \r\n position_ids = tf.math.cumsum(inputs.attention_mask, axis=-1, exclusive=True)\r\n out_3 = model(**inputs, position_ids=position_ids + 10)\r\n \r\n print(tf.reduce_max(tf.abs(out_1.logits[:, -1, :] - out_2.logits[:, -1, :]))) # tf.Tensor(0.0, shape=(), dtype=float16)\r\n print(tf.reduce_max(tf.abs(out_1.logits[:, -1, :] - out_3.logits[:, -1, :]))) # tf.Tensor(0.01563, shape=(), dtype=float16)\r\n ```\r\n</details>\r\n\r\n\r\n3. Despite the above, and looking at this PR, I think what @ArthurZucker wrote here is the way to go. 
In `prepare_inputs_for_generation`, we were computing `position_ids` from the `attention_mask` anyways. If we pass the logic from there to the forward pass, we cut a source of bugs (users trying to generate without `.generate()`) 👍 \r\n4. GPT-J needs to be fixed (see TF/FLAX) :p To be clear, the issue is long-standing and not a result of #21869 !\r\n5. We should double-check at least the main models. Some models, like `OPT`, are okay, as they compute the position embedding directly from the attention mask. Others, like `Codegen`, suffer from the same problem as `GPT-J`.\r\n \r\n",
"@gante I was coincidentally just having this problem with codegen and so have opened #22069 following your hints above.",
"Thanks for the in depth review @gante ! ",
"Will re-open this with a fix for the cross PT-TF tests. The TF code has to be modified as otherwise the default positional ids are wrong. ",
"Sorry for missing that the tf version also needed an update on this one! ",
"Thank you @ArthurZucker . No worry - we didn't detect this because the PT/TF cross tests in the corresponding TF model test files are not fetched by the test fetcher script. The current version will only detect the tests in the (modified + involved indirectly) PyTorch modeling/test files."
] | 1,677
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
Proposes a fix for #21080.
Modifies the default generation of the positional ids for the `gpt-2` as well as the `decision_transformer` model.
In the issue @LysandreJik proposed to update the doc, but when I checked it seemed like this would only affect 2 models and is backward compatible:
- the potential impact is only for people who were using batched padded input when generating with a single sentence. This means that their output will now be correct
- the default behavior is kept if no attention mask is given.
If this is not acceptable, I am also glad to add a warning when creating the positional ids.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21853/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21853",
"html_url": "https://github.com/huggingface/transformers/pull/21853",
"diff_url": "https://github.com/huggingface/transformers/pull/21853.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21853.patch",
"merged_at": 1678450526000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21852
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21852/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21852/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21852/events
|
https://github.com/huggingface/transformers/pull/21852
| 1,603,488,558
|
PR_kwDOCUB6oc5K9ARg
| 21,852
|
TrOCR comment change
|
{
"login": "AvishekMondalQC",
"id": 120577659,
"node_id": "U_kgDOBy_eew",
"avatar_url": "https://avatars.githubusercontent.com/u/120577659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AvishekMondalQC",
"html_url": "https://github.com/AvishekMondalQC",
"followers_url": "https://api.github.com/users/AvishekMondalQC/followers",
"following_url": "https://api.github.com/users/AvishekMondalQC/following{/other_user}",
"gists_url": "https://api.github.com/users/AvishekMondalQC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AvishekMondalQC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AvishekMondalQC/subscriptions",
"organizations_url": "https://api.github.com/users/AvishekMondalQC/orgs",
"repos_url": "https://api.github.com/users/AvishekMondalQC/repos",
"events_url": "https://api.github.com/users/AvishekMondalQC/events{/privacy}",
"received_events_url": "https://api.github.com/users/AvishekMondalQC/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi,\r\n\r\nThanks for your PR! We actually test that code snippet as TrOCR is [included in the doc tests](https://github.com/huggingface/transformers/blob/b29e2dcaff114762e65eaea739ba1076fc5d1c84/utils/documentation_tests.txt#L190). So the code runs fine. Of course, if you would use that model in combination with TrOCRProcessor, you would have to instantiate the image processor's size to be {\"height\": 224, \"width\": 224}, like so:\r\n\r\n```\r\nfrom transformers import RobertaTokenizer, ViTImageProcessor, TrOCRProcessor, ViTConfig, TrOCRConfig, VisionEncoderDecoderModel\r\n\r\ntokenizer = RobertaTokenizer.from_pretrained(\"roberta-base\")\r\nimage_processor = ViTImageProcessor(size={\"height\": 224, \"width\": 224})\r\nprocessor = TrOCRProcessor(tokenizer=tokenizer, image_processor=image_processor)\r\n\r\nconfig_encoder = ViTConfig()\r\nconfig_decoder = TrOCRConfig()\r\n\r\nconfig = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)\r\nmodel = VisionEncoderDecoderModel(config=config)\r\n```",
"Thanks for your quick reply! \r\n\r\nI believe that particular bit of code in the docstring runs fine because `model` gets overwritten a couple of lines below it. Do you think it is worth it to put the lines of code you've written in your comment in the docstring as well in case someone wants to try to train a model without the weights from microsoft? \r\n\r\nIf not, it's not a big deal, I'll just close this PR."
] | 1,677
| 1,678
| 1,678
|
NONE
| null |
Thanks for the TrOCR modules @NielsRogge !
I noticed something very small in one of the comments - if you copy-pasted the lines from the comment block, they wouldn't work, because the default image size in `ViTConfig` is 224, while the `processor` is expecting the image to be resized to 384x384.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21852/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21852",
"html_url": "https://github.com/huggingface/transformers/pull/21852",
"diff_url": "https://github.com/huggingface/transformers/pull/21852.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21852.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21851
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21851/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21851/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21851/events
|
https://github.com/huggingface/transformers/pull/21851
| 1,603,474,348
|
PR_kwDOCUB6oc5K89Pd
| 21,851
|
[WIP] Flax pipeline support :pickup_truck:
|
{
"login": "Shubhamai",
"id": 51819922,
"node_id": "MDQ6VXNlcjUxODE5OTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/51819922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shubhamai",
"html_url": "https://github.com/Shubhamai",
"followers_url": "https://api.github.com/users/Shubhamai/followers",
"following_url": "https://api.github.com/users/Shubhamai/following{/other_user}",
"gists_url": "https://api.github.com/users/Shubhamai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shubhamai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shubhamai/subscriptions",
"organizations_url": "https://api.github.com/users/Shubhamai/orgs",
"repos_url": "https://api.github.com/users/Shubhamai/repos",
"events_url": "https://api.github.com/users/Shubhamai/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shubhamai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21851). All of your documentation changes will be reflected on that endpoint.",
"@Narsil , @sanchit-gandhi It looks good for an initial review. @sanchit-gandhi I am not also super confident if I have gotten the `jit` part right, I would appreciate if you can take a look and given any feedbacks on that. \r\n\r\nThanks a lot for your time. ",
"Awesome to see this! Nice timing. automatic_speech_recognition would be cool to have too with `FlaxWhisper` that was merged recently!",
"Thanks for your PR @Shubhamai . At this stage, we do not have plans to have a pipeline in Flax similar to the ones in PyTorch and TensorFlow, which is why the original PR from @Narsil was not continued. There are several reasons for that:\r\n\r\n1. The `pipeline` object is aimed at software engineers not necessarily familiar with machine learning, and Flax users are more researchers\r\n2. `pipeline`s are meant to quickly try out a task, which does not work well in Flax/JAX where you have to compile and jit to get nice performance.\r\n\r\nWe can leave the PR open so that users try out your branch if they want something like a pipeline for Flax, but we don't want to commit to maintain this code, so we won't merge it in the main branch. What we might add in the future is a way to use large models on TPUs with Jax in a pipeline.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,680
| 1,680
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds flax support in pipelines + corresponding flax changes of https://github.com/huggingface/transformers/pull/21516 by [ydshieh](https://github.com/ydshieh)
> Right before opening this pull request, with pretty much everything ready for review (including tests, ~all tasks working), I was looking for issues this PR would fix and accidentally found a PR [[WIP] Adding support for flax for pipelines.](https://github.com/huggingface/transformers/pull/14356/) (nearly 1-2 years old) by [Narsil](https://github.com/Narsil) previously unknown to me. I am adding this note to alleviate any confusion. After the commit `bug fixes & multiple tasks support`, I still continued with this PR since (I guess) a lot of the `transformers` codebase has changed/been updated, but I did use Narsil's PR to add anything I missed here and to make it more polished.
> So I deeply thank Narsil and patrickvonplaten for the PR & review of Narsil's PR, as it helped this PR become more polished. Apologies, as I should have checked whether any similar PR was WIP.
*Not mentioning due to currently in WIP*
## Examples
```py
# Image Classification
vision_classifier = pipeline(task="image-classification", framework="flax", device="cuda:0") #cpu:0
print(vision_classifier(images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"))
# Translation
en_fr_translator = pipeline("translation_en_to_fr", framework="flax")
print(en_fr_translator("How old are you?"))
# Image to Text
captioner = pipeline(model="ydshieh/vit-gpt2-coco-en", framework="flax")
print(captioner("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"))
```
## Currently Supported tasks
Task |PT| TF| Flax| Reason for no Flax support|
|---|---|---|---|---|
audio_classification |:heavy_check_mark: |:x:| :x: |`AudioClassificationPipeline` is only available in PyTorch.|
automatic_speech_recognition| :heavy_check_mark:| :x:| :x: |The `AutomaticSpeechRecognitionPipeline` is only available in PyTorch.|
conversational |:heavy_check_mark: |:heavy_check_mark:| :heavy_check_mark:|
depth_estimation| :heavy_check_mark:| :x:| :x: |No Flax model available.|
document_question_answering| :heavy_check_mark:| :x:| :x:| No `FlaxAutoModelForDocumentQuestionAnswering` class available.|
feature_extraction |:heavy_check_mark:| :heavy_check_mark: |:heavy_check_mark:|
fill_mask| :heavy_check_mark:| :heavy_check_mark:| :heavy_check_mark: |
image_classification| :heavy_check_mark: |:heavy_check_mark: |:heavy_check_mark:|
image_segmentation| :heavy_check_mark:| :x: |:x: |No Flax model available.|
image_to_text| :heavy_check_mark: |:heavy_check_mark: |:heavy_check_mark:
object_detection| :heavy_check_mark:| :x: |:x:| No Flax model available.
question_answering| :heavy_check_mark:| :heavy_check_mark:| :heavy_check_mark:
summarization| :heavy_check_mark:| :heavy_check_mark:| :heavy_check_mark:
table_question_answering| :heavy_check_mark:| :heavy_check_mark:| :x:| No Flax model available.
text2text_generation| :heavy_check_mark:| :heavy_check_mark:| :heavy_check_mark:
text_classification |:heavy_check_mark:| :heavy_check_mark:| :heavy_check_mark:
text_generation |:heavy_check_mark: |:heavy_check_mark: |:heavy_check_mark:
token_classification| :heavy_check_mark:| :heavy_check_mark:| :heavy_check_mark:
translation |:heavy_check_mark: |:heavy_check_mark:| :heavy_check_mark:
video_classification| :heavy_check_mark: |:x: |:x:| No Flax model available.
visual_question_answering| :heavy_check_mark: |:x:| :x: |No Flax model available.
zero_shot_classification| :heavy_check_mark: |:heavy_check_mark:| :heavy_check_mark:
zero_shot_object_detection| :heavy_check_mark: |:x:| :x: |No Flax model available.
## Custom models link used in testing
~Could probably fix this by adding `from_pt=True` in `model_kwargs` in testing.~ Nope, during flax testing, PyTorch doesn't seem to be available.
- [Shubhamai/distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/Shubhamai/distilbert-base-uncased-finetuned-sst-2-english) < [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)
- [Shubhamai/tiny-random-distilbert](https://huggingface.co/Shubhamai/tiny-random-distilbert) < [hf-internal-testing/tiny-random-distilbert](https://huggingface.co/hf-internal-testing/tiny-random-distilbert)
- [Shubhamai/tiny-random-vit](https://huggingface.co/Shubhamai/tiny-random-vit) < [hf-internal-testing/tiny-random-vit](https://huggingface.co/hf-internal-testing/tiny-random-vit)
- [Shubhamai/tiny-mbart](https://huggingface.co/Shubhamai/tiny-mbart) < [sshleifer/tiny-mbart](https://huggingface.co/)
- [Shubhamai/tiny-bert-for-token-classification](https://huggingface.co/Shubhamai/tiny-bert-for-token-classification) < [hf-internal-testing/tiny-bert-for-token-classification](https://huggingface.co/hf-internal-testing/tiny-bert-for-token-classification)
- [Shubhamai/tiny-random-clip-zero-shot-image-classification](https://huggingface.co/Shubhamai/tiny-random-clip-zero-shot-image-classification) < [hf-internal-testing/tiny-random-clip-zero-shot-image-classification](https://huggingface.co/hf-internal-testing/tiny-random-clip-zero-shot-image-classification)
- [Shubhamai/tiny-distilbert-base-cased-distilled-squad](https://huggingface.co/Shubhamai/tiny-distilbert-base-cased-distilled-squad) < [sshleifer/tiny-distilbert-base-cased-distilled-squad](https://huggingface.co/sshleifer/tiny-distilbert-base-cased-distilled-squad)
## Few questions for maintainers and users.
- Should the framework name `pipeline(..., framework=)` be `jax` or `flax`? In this PR I used `flax` because we have been using it as a prefix in flax models, although we use `jax` as the alias in tokenizer functions and in the Hugging Face Hub, so I am unable to make a final decision on this.
- If we use `framework="flax"`, a bit of inconvenient code emerges: `self.tokenizer(inputs, return_tensors=self.framework if self.framework != "flax" else "jax")`. Because the tokenizer/image preprocessor doesn't recognize the `flax` framework, we have to change it to `jax`. Although it's a very small piece of code, I am a bit uncomfortable presenting it, as it could be a source of bugs. The solution would be to either rename `framework` to jax, or add a flax alias in the tokenizer/image preprocessor that functions exactly like jax.
- Should we JIT the model by default, or via another function argument?
Fixes
- https://github.com/huggingface/transformers/issues/12627
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- flax: sanchit-gandhi
- pipelines: Narsil
## TODO
- [ ] Profiling & Benchmarking.
- [ ] Updating docs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21851/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21851",
"html_url": "https://github.com/huggingface/transformers/pull/21851",
"diff_url": "https://github.com/huggingface/transformers/pull/21851.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21851.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21850
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21850/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21850/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21850/events
|
https://github.com/huggingface/transformers/issues/21850
| 1,603,461,149
|
I_kwDOCUB6oc5fkuAd
| 21,850
|
ValueError: Please make sure you have `sentencepiece` installed in order to use this tokenizer.
|
{
"login": "tatakof",
"id": 18502770,
"node_id": "MDQ6VXNlcjE4NTAyNzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/18502770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tatakof",
"html_url": "https://github.com/tatakof",
"followers_url": "https://api.github.com/users/tatakof/followers",
"following_url": "https://api.github.com/users/tatakof/following{/other_user}",
"gists_url": "https://api.github.com/users/tatakof/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tatakof/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tatakof/subscriptions",
"organizations_url": "https://api.github.com/users/tatakof/orgs",
"repos_url": "https://api.github.com/users/tatakof/repos",
"events_url": "https://api.github.com/users/tatakof/events{/privacy}",
"received_events_url": "https://api.github.com/users/tatakof/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Following the steps you indicate to reproduce does not give me any error in Colab. Is it possible you installed sentencepiece after installing transformers and did not restart your environment?",
"You are right, thanks @sgugger \r\n(reloading google colab didn't fix my issue but creating a new colab did the trick)"
] | 1,677
| 1,677
| 1,677
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <Yes>
- Using distributed or parallel set-up in script?: <No>
### Who can help?
@Narsil @ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
run in colab the example in this documentation https://huggingface.co/tasks/translation:
```
!pip install transformers transformers[sentencepiece]
```
```
from transformers import pipeline
model_checkpoint = "Helsinki-NLP/opus-mt-en-fr"
translator = pipeline("translation", model=model_checkpoint)
translator("How are you?")
# [{'translation_text': 'Comment allez-vous ?'}]
```
### Expected behavior
output
```
# [{'translation_text': 'Comment allez-vous ?'}]
```
as signaled in this documentation https://huggingface.co/tasks/translation
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21850/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21849
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21849/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21849/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21849/events
|
https://github.com/huggingface/transformers/pull/21849
| 1,603,394,908
|
PR_kwDOCUB6oc5K8r_3
| 21,849
|
[ConvBert] Fix #21523
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Fixes #21523, the invalid reshaping of the context layer, and adds a test to make sure we support different head ratios.
Made sure that the slow tests all pass.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21849/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21849",
"html_url": "https://github.com/huggingface/transformers/pull/21849",
"diff_url": "https://github.com/huggingface/transformers/pull/21849.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21849.patch",
"merged_at": 1677665464000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21848
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21848/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21848/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21848/events
|
https://github.com/huggingface/transformers/issues/21848
| 1,603,087,839
|
I_kwDOCUB6oc5fjS3f
| 21,848
|
Token classification for a non-textual data
|
{
"login": "vitalyshalumov",
"id": 33824221,
"node_id": "MDQ6VXNlcjMzODI0MjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitalyshalumov",
"html_url": "https://github.com/vitalyshalumov",
"followers_url": "https://api.github.com/users/vitalyshalumov/followers",
"following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}",
"gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions",
"organizations_url": "https://api.github.com/users/vitalyshalumov/orgs",
"repos_url": "https://api.github.com/users/vitalyshalumov/repos",
"events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitalyshalumov/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You should probably ask this open question on the [forums](https://discuss.huggingface.co/).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,680
| 1,680
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.13.0-37-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.10.2+cu113 (True)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm looking for an implementation of an architecture that performs token classification, but where the input is not an integer that indexes into the vocabulary but a vector of numbers.
Basically, each token in the input is represented by a vector; each token is already an embedding vector.
How can this be achieved?
Best,
Vitaly
### Expected behavior
Input vector of size 768 for each token. A sequence of such tokens of up to 512.
Maybe it is as simple as removing the layer
(word_embeddings): Embedding(50265, 768, padding_idx=1)?
In any case a link to the solution would be most helpful.
Best,
Vitaly
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21848/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21847
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21847/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21847/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21847/events
|
https://github.com/huggingface/transformers/pull/21847
| 1,602,956,188
|
PR_kwDOCUB6oc5K7MVL
| 21,847
|
fsdp bf16 enable autocast
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hello @sgugger, officially, there is not much info on `mixed_precision` support of FSDP. In the official docs here https://pytorch.org/docs/stable/fsdp.html, it doesn't mention anything regarding `bf16` and `fp16` nuances. \r\n\r\nIn the official tutorial: https://github.com/lessw2020/transformer_central/tree/main/mixed_precision, for `bf16` they don't specify the need for autocasting. Here, it is mentioned the need for `ShardedGradScaler` for `fp16`. And the issue https://github.com/pytorch/pytorch/issues/75676 is still open. \r\n\r\nHowever, `bf16` support with FSDP does work for few models, notably T5 as mentioned in this github comment from my experiments: https://github.com/pytorch/pytorch/issues/79605#issuecomment-1184410231\r\n\r\nIn T5, attention probs are casted back to `bf16` explicitly by this line of code: https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py#L560-L562 . \r\n\r\nThis avoids the error that happens for BERT in #21560. With this observation, I just tried enabling autocast and observed no errors and expected performance. Hence, this PR. \r\n\r\n\r\n",
"using this PR when I run:\r\n```bash\r\ntorchrun --nproc_per_node=2 run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --overwrite_output_dir --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 2e-5 --num_train_epochs 3 --output_dir $TASK_NAME/ --fsdp \"full_shard auto_wrap\" --fsdp_config \"fsdp_config.json\" --bf16\r\n```\r\n\r\nwirh fsdp_config.json contents being:\r\n```\r\n{\r\n\t\"fsdp_transformer_layer_cls_to_wrap\": \"BertLayer\",\r\n\t\"fsdp_backward_prefetch\": \"backward_pre\",\r\n\t\"fsdp_forward_prefetch\": true,\r\n\t\"limit_all_gathers\": true\r\n}\r\n```\r\n\r\noutput logs:\r\n```\r\nwandb: Run summary:\r\nwandb: eval/accuracy 0.84804\r\nwandb: eval/combined_score 0.87039\r\nwandb: eval/f1 0.89273\r\nwandb: eval/loss 0.36461\r\n```\r\n\r\nwithout fsdp run gave below results:\r\n```\r\ntorchrun --nproc_per_node=2 run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --overwrite_output_dir --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 2e-5 --num_train_epochs 3 --output_dir $TASK_NAME/ --bf16\r\n```\r\n\r\n```\r\nwandb: Run summary:\r\nwandb: eval/accuracy 0.84804\r\nwandb: eval/combined_score 0.87057\r\nwandb: eval/f1 0.8931\r\nwandb: eval/loss 0.36857\r\n```\r\n\r\nSo, similar performance between them."
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
1. Fixes #21560 wrt FSDP integration.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21847/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21847",
"html_url": "https://github.com/huggingface/transformers/pull/21847",
"diff_url": "https://github.com/huggingface/transformers/pull/21847.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21847.patch",
"merged_at": 1677768488000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21846
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21846/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21846/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21846/events
|
https://github.com/huggingface/transformers/pull/21846
| 1,602,947,867
|
PR_kwDOCUB6oc5K7KcC
| 21,846
|
[time series] Add Time series inputs tests
|
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds tests to make sure that the appropriate inputs are being created for the time series transformer for the training and generation use-cases.
cc @NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21846/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21846",
"html_url": "https://github.com/huggingface/transformers/pull/21846",
"diff_url": "https://github.com/huggingface/transformers/pull/21846.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21846.patch",
"merged_at": 1677786216000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21845
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21845/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21845/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21845/events
|
https://github.com/huggingface/transformers/pull/21845
| 1,602,906,273
|
PR_kwDOCUB6oc5K7A8w
| 21,845
|
Copy models back to CPU before merging them after evaluation.
|
{
"login": "zolastro",
"id": 21045047,
"node_id": "MDQ6VXNlcjIxMDQ1MDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/21045047?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zolastro",
"html_url": "https://github.com/zolastro",
"followers_url": "https://api.github.com/users/zolastro/followers",
"following_url": "https://api.github.com/users/zolastro/following{/other_user}",
"gists_url": "https://api.github.com/users/zolastro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zolastro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zolastro/subscriptions",
"organizations_url": "https://api.github.com/users/zolastro/orgs",
"repos_url": "https://api.github.com/users/zolastro/repos",
"events_url": "https://api.github.com/users/zolastro/events{/privacy}",
"received_events_url": "https://api.github.com/users/zolastro/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for your PR. This is done by the function `nested_numpify` on the line just below.",
"I see... I still have a problem when evaluating my model (out-of-memory). I will open an issue about it.\r\n\r\nI'm closing this pull request, thanks!"
] | 1,677
| 1,677
| 1,677
|
NONE
| null |
# What does this PR do?
During evaluation, move the predictions accumulated over `eval_accumulation_steps` to CPU before merging them, to avoid an out-of-memory error.
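The idea can be sketched as follows (illustrative only; inside `Trainer` this is effectively what `nested_numpify` does right after accumulation):

```python
import torch

def gather_predictions(batches):
    # Detach and move each batch of logits to CPU before concatenating,
    # so GPU memory is released as the eval loop progresses instead of
    # holding every batch on the device until the end.
    return torch.cat([b.detach().cpu() for b in batches], dim=0)

preds = gather_predictions([torch.randn(4, 3) for _ in range(5)])
```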
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21845/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21845",
"html_url": "https://github.com/huggingface/transformers/pull/21845",
"diff_url": "https://github.com/huggingface/transformers/pull/21845.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21845.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21844
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21844/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21844/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21844/events
|
https://github.com/huggingface/transformers/pull/21844
| 1,602,848,627
|
PR_kwDOCUB6oc5K60d4
| 21,844
|
Fix gradient checkpointing bug BioGpt
|
{
"login": "saswatmeher",
"id": 35535056,
"node_id": "MDQ6VXNlcjM1NTM1MDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/35535056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saswatmeher",
"html_url": "https://github.com/saswatmeher",
"followers_url": "https://api.github.com/users/saswatmeher/followers",
"following_url": "https://api.github.com/users/saswatmeher/following{/other_user}",
"gists_url": "https://api.github.com/users/saswatmeher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saswatmeher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saswatmeher/subscriptions",
"organizations_url": "https://api.github.com/users/saswatmeher/orgs",
"repos_url": "https://api.github.com/users/saswatmeher/repos",
"events_url": "https://api.github.com/users/saswatmeher/events{/privacy}",
"received_events_url": "https://api.github.com/users/saswatmeher/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes a bug that users can encounter when calling `generate()` on models that use gradient checkpointing.
Fixes issue #21737 for BioGpt.
cc @younesbelkada, @gante
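For context, the fix pattern used across this series of PRs is to force `use_cache=False` during training when gradient checkpointing is enabled, since checkpointing recomputes activations in the backward pass and clashes with cached past key/values. A dependency-free stand-in for that guard:

```python
import warnings

def resolve_use_cache(gradient_checkpointing: bool, training: bool, use_cache: bool) -> bool:
    # Mirrors the guard added to each model's forward(): caching past
    # key/values is incompatible with recomputing activations, so caching
    # is switched off with a warning instead of failing in generate().
    if gradient_checkpointing and training and use_cache:
        warnings.warn(
            "`use_cache=True` is incompatible with gradient checkpointing; "
            "setting `use_cache=False`."
        )
        return False
    return use_cache
```

For example, `resolve_use_cache(True, True, True)` returns `False`, while inference (`training=False`) leaves caching on.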
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21844/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21844",
"html_url": "https://github.com/huggingface/transformers/pull/21844",
"diff_url": "https://github.com/huggingface/transformers/pull/21844.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21844.patch",
"merged_at": 1677585386000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21843
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21843/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21843/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21843/events
|
https://github.com/huggingface/transformers/pull/21843
| 1,602,681,618
|
PR_kwDOCUB6oc5K6QBK
| 21,843
|
[`T5`] Fix torchquant issue
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Can confirm t5 & int8 slow tests are passing, merging!"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #21839
This PR fixes a bug that was introduced with https://github.com/huggingface/transformers/pull/21281 - before that PR, the snippet below was working:
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
model_name = "google/flan-t5-small"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
input_text = "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
output = model.generate(input_ids)
```
On the `main` branch the snippet no longer works because of the check on `self.wo.weight.dtype`: `torch.quantization.quantize_dynamic` converts the `weight` attribute of all `nn.Linear` layers into a bound function, leading to an error.
Since users were able to run this snippet on previous versions, I think we should keep supporting this feature.
Also added a test for this.
cc @sgugger
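To illustrate the failure mode without torch: dynamic quantization replaces the `weight` attribute with a bound method, so any code that reads `weight.dtype` must first check it has a real tensor. A minimal stand-in (these classes are illustrative, not the real torch modules):

```python
class PlainLinear:
    def __init__(self):
        self.weight = object()  # stand-in for a tensor exposing .dtype

class DynamicQuantLinear:
    def weight(self):  # dynamically quantized modules expose weight() as a method
        return "packed int8 weights"

def weight_is_inspectable(module) -> bool:
    # Guard mirroring the fix: only look at .dtype when `weight` is an
    # actual attribute rather than a bound method.
    return not callable(module.weight)
```

With this guard, `weight_is_inspectable(PlainLinear())` holds while a quantized module is skipped, so the dtype check in `modeling_t5.py` no longer crashes.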
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21843/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21843",
"html_url": "https://github.com/huggingface/transformers/pull/21843",
"diff_url": "https://github.com/huggingface/transformers/pull/21843.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21843.patch",
"merged_at": 1677593385000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21842
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21842/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21842/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21842/events
|
https://github.com/huggingface/transformers/pull/21842
| 1,602,664,365
|
PR_kwDOCUB6oc5K6MVQ
| 21,842
|
Fix gradient checkpointing bug marian
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR fixes a bug that users can encounter when calling `generate()` on models that use gradient checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21842/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21842",
"html_url": "https://github.com/huggingface/transformers/pull/21842",
"diff_url": "https://github.com/huggingface/transformers/pull/21842.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21842.patch",
"merged_at": 1677771676000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21841
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21841/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21841/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21841/events
|
https://github.com/huggingface/transformers/pull/21841
| 1,602,660,765
|
PR_kwDOCUB6oc5K6Lj_
| 21,841
|
Fix gradient checkpointing bug M2M 100
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR fixes a bug that users can encounter when calling `generate()` on models that use gradient checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21841/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21841",
"html_url": "https://github.com/huggingface/transformers/pull/21841",
"diff_url": "https://github.com/huggingface/transformers/pull/21841.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21841.patch",
"merged_at": 1677771657000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21840
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21840/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21840/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21840/events
|
https://github.com/huggingface/transformers/pull/21840
| 1,602,655,258
|
PR_kwDOCUB6oc5K6KX8
| 21,840
|
Fix gradient checkpointing bug LED
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR fixes a bug that users can encounter when calling `generate()` on models that use gradient checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21840/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21840/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21840",
"html_url": "https://github.com/huggingface/transformers/pull/21840",
"diff_url": "https://github.com/huggingface/transformers/pull/21840.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21840.patch",
"merged_at": 1677771635000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21839
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21839/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21839/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21839/events
|
https://github.com/huggingface/transformers/issues/21839
| 1,602,652,897
|
I_kwDOCUB6oc5fhorh
| 21,839
|
quantize_dynamic on T5 model results in `AttributeError: 'function' object has no attribute 'dtype'`
|
{
"login": "gerhean",
"id": 16630400,
"node_id": "MDQ6VXNlcjE2NjMwNDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/16630400?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gerhean",
"html_url": "https://github.com/gerhean",
"followers_url": "https://api.github.com/users/gerhean/followers",
"following_url": "https://api.github.com/users/gerhean/following{/other_user}",
"gists_url": "https://api.github.com/users/gerhean/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gerhean/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gerhean/subscriptions",
"organizations_url": "https://api.github.com/users/gerhean/orgs",
"repos_url": "https://api.github.com/users/gerhean/repos",
"events_url": "https://api.github.com/users/gerhean/events{/privacy}",
"received_events_url": "https://api.github.com/users/gerhean/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello @gerhean \r\nThanks for the issue! \r\nhttps://github.com/huggingface/transformers/pull/21843 should fix your problem; you can already use the fix by checking out that branch",
"Hi @gerhean \r\nIt's now on the `main` branch, so you should be able to use it without any issue!"
] | 1,677
| 1,677
| 1,677
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Paste the following into colab:
```
!pip install transformers sentencepiece accelerate
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
model_name = "google/flan-t5-small"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
input_text = "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
output = model.generate(input_ids)
```
### Expected behavior
No Error
# Bug
```
AttributeError Traceback (most recent call last)
[<ipython-input-3-96b349bbc122>](https://localhost:8080/#) in <module>
1 input_text = "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
2 input_ids = tokenizer(input_text, return_tensors="pt").input_ids
----> 3 output = model.generate(input_ids)
10 frames
[/usr/local/lib/python3.8/dist-packages/transformers/models/t5/modeling_t5.py](https://localhost:8080/#) in forward(self, hidden_states)
314 # See https://github.com/huggingface/transformers/issues/20287
315 # we also make sure the weights are not in `int8` in case users will force `_keep_in_fp32_modules` to be `None``
--> 316 if hidden_states.dtype != self.wo.weight.dtype and self.wo.weight.dtype != torch.int8:
317 hidden_states = hidden_states.to(self.wo.weight.dtype)
318
AttributeError: 'function' object has no attribute 'dtype'
```
# Fix
When I commented out the lines 316-317 in `transformers/models/t5/modeling_t5.py`, the model runs.
`quantize_dynamic` converts `self.wo.weight` into a bound function which, when called, returns the weights. Hence `self.wo.weight` is a function with no attribute `dtype`.
Bug is introduced by #21281
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21839/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21838
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21838/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21838/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21838/events
|
https://github.com/huggingface/transformers/issues/21838
| 1,602,459,016
|
I_kwDOCUB6oc5fg5WI
| 21,838
|
Unable to convert BioGpt slow tokenizer to fast: token out of vocabulary
|
{
"login": "seantaud",
"id": 77921574,
"node_id": "MDQ6VXNlcjc3OTIxNTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/77921574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seantaud",
"html_url": "https://github.com/seantaud",
"followers_url": "https://api.github.com/users/seantaud/followers",
"following_url": "https://api.github.com/users/seantaud/following{/other_user}",
"gists_url": "https://api.github.com/users/seantaud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seantaud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seantaud/subscriptions",
"organizations_url": "https://api.github.com/users/seantaud/orgs",
"repos_url": "https://api.github.com/users/seantaud/repos",
"events_url": "https://api.github.com/users/seantaud/events{/privacy}",
"received_events_url": "https://api.github.com/users/seantaud/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey! It would be a bit difficult to say that this is a bug as we do not have an implementation of the `BioGPTConverter` which would be cool btw. In order to properly create a fast tokenizer you need to have the `pre_normalizer` the `decoder` etc. Take a look at `convert_slow_tokenizers` for more details! Feel free to open a PR and ping me 😉 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,683
| 1,683
|
NONE
| null |
### System Info
I was trying to use the BioGpt model for fine-tuning on my QA task. I would like to construct a fast tokenizer class based on `BioGptTokenizer` so that I could use `offset_mapping` to know which words the tokens originate from. Unfortunately, when creating a `BioGptTokenizerFast` from `PreTrainedTokenizerFast` via `convert_slow_tokenizer`, the following error occurs: Error while initializing BPE: Token `-@</w>` out of vocabulary.
#### Error trace
```
Traceback (most recent call last):
File "run.py", line 124, in <module>
trainer, predict_dataset = get_trainer(args)
File "***/tasks/qa/get_trainer.py", line 31, in get_trainer
tokenizer = BioGptTokenizerFast.from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py", line 1801, in from_pretrained
return cls._from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py", line 1956, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "***/model/biogpt/tokenization_biogpt_fast.py", line 117, in __init__
super().__init__(
File "***/model/biogpt/tokenization_utils_fast.py", line 114, in __init__
fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
File "***/model/biogpt/convert_slow_tokenizer.py", line 1198, in convert_slow_tokenizer
return converter_class(transformer_tokenizer).converted()
File "***/model/biogpt/convert_slow_tokenizer.py", line 273, in converted
BPE(
Exception: Error while initializing BPE: Token `-@</w>` out of vocabulary
```
### Who can help?
@ArthurZucker @younesbelkada @kamalkraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I copied the related code to a Colab notebook. This is the link: https://colab.research.google.com/drive/1IMhiDz45GiarBLgXG9B2rA_u0ZOmmjJS?usp=sharing
### Expected behavior
According to issue [#9290](https://github.com/huggingface/transformers/issues/9290), this problem might be caused by some missing tokens in `vocab.json` or `merges.txt`. Could you please check? Thank you very much!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21838/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21837
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21837/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21837/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21837/events
|
https://github.com/huggingface/transformers/issues/21837
| 1,602,406,039
|
I_kwDOCUB6oc5fgsaX
| 21,837
|
transformers == 4.26.0 has a bug
|
{
"login": "tongzhao315",
"id": 45264967,
"node_id": "MDQ6VXNlcjQ1MjY0OTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/45264967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tongzhao315",
"html_url": "https://github.com/tongzhao315",
"followers_url": "https://api.github.com/users/tongzhao315/followers",
"following_url": "https://api.github.com/users/tongzhao315/following{/other_user}",
"gists_url": "https://api.github.com/users/tongzhao315/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tongzhao315/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tongzhao315/subscriptions",
"organizations_url": "https://api.github.com/users/tongzhao315/orgs",
"repos_url": "https://api.github.com/users/tongzhao315/repos",
"events_url": "https://api.github.com/users/tongzhao315/events{/privacy}",
"received_events_url": "https://api.github.com/users/tongzhao315/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"If you want to use this model, you should build from source. It was merged to main on the 25th of January, the release does not include it. Next release will make it available. \r\nClosing as I can import on main. "
] | 1,677
| 1,677
| 1,677
|
NONE
| null |
### System Info
Traceback (most recent call last):
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
ImportError: cannot import name 'BridgeTowerProcessor' from 'transformers' (lib/python3.7/site-packages/transformers/__init__.py). Could you fix this bug? I am using transformers == 4.26.0 (installed via pip).
/cc @xianbaoqian
### Who can help?
@xianbaoqian
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
# forward pass
scores = dict()
for text in texts:
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
scores[text] = outputs.logits[0, 1].item()
### Expected behavior
`BridgeTowerProcessor` and `BridgeTowerForImageAndTextRetrieval` should import without errors.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21837/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21836
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21836/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21836/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21836/events
|
https://github.com/huggingface/transformers/issues/21836
| 1,602,296,207
|
I_kwDOCUB6oc5fgRmP
| 21,836
|
HF's Flan-T5 implementation doesn't support Chinese or code despite being trained on it
|
{
"login": "michaelroyzen",
"id": 45830328,
"node_id": "MDQ6VXNlcjQ1ODMwMzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/45830328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelroyzen",
"html_url": "https://github.com/michaelroyzen",
"followers_url": "https://api.github.com/users/michaelroyzen/followers",
"following_url": "https://api.github.com/users/michaelroyzen/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelroyzen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelroyzen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelroyzen/subscriptions",
"organizations_url": "https://api.github.com/users/michaelroyzen/orgs",
"repos_url": "https://api.github.com/users/michaelroyzen/repos",
"events_url": "https://api.github.com/users/michaelroyzen/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelroyzen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey! Thanks for posting. The original tokenizer does not support chinese (it only supports 4 language I think) either. \r\nHere is a minimal reproducing script using the vocabulary path provided in the `t5_1_1_base.gin` that is used for all of the Flan T5 (according to github). \r\n```python \r\n>>> import seqio\r\n>>> vocabulary = seqio.SentencePieceVocabulary(\"gs://t5-data/vocabs/cc_all.32000.100extra/sentencepiece.model\")\r\n>>> vocabulary.tokenizer.encode(\"你好你好吗\")\r\n[3, 2]\r\n>>> vocabulary.tokenizer.decode(vocabulary.tokenizer.encode(\"你好你好吗\"))\r\n' ⁇ '\r\n```\r\nWe probably made a mistake in the `tags` of the model that should not include these. The paper does not mention anything else, and I tested with the mT5 tokenizer without avail. \r\nWill try too look a bit more into this. ",
"They probably did not release the multilingual finetune checkpoints : we only have token vocabulary of 32 000 instead of 250 000 used for mT5 their multilingual tokenizers.",
"Thanks for the clarification @ArthurZucker. But the paper did mention the model being fine-tuned on code -- but I don't see how that is possible if the model can't support newlines, brackets, or tabs as tokens.",
"Yes! The paper seems to mention multilingual and code, but I dug and could not reproduce anything... Again a minimal reproduction script: \r\n```python \r\n>>> import seqio\r\n>>> vocabulary = seqio.SentencePieceVocabulary(\"gs://t5-data/vocabs/cc_all.32000.100extra/sentencepiece.model\")\r\n>>> vocabulary.tokenizer.encode(\"if True:\\n\\tprint('Wow')\")\r\n[3, 99, 10998, 10, 2281, 599, 31, 518, 2381, 31, 61]\r\n>>> vocabulary.tokenizer.decode(vocabulary.tokenizer.encode(\"if True:\\n\\tprint('Wow')\"))\r\n\"if True: print('Wow')\"\r\n```\r\nI might not be using some special arguments but I am not familiar with the black box seqio 😅 ",
"Right -- this is exactly what I'm talking about. Is there any way to reach out to the authors and figure this out? Seems pretty important that core functionality isn't working. And in the meantime, perhaps the model card should remove mentions to multilingual capabilities that it can't actually support.",
"We merged a lot of PRs on the hub to fix this. Marking as resolved! Thanks for reporting",
"@ArthurZucker \r\nhi, I have the same problem with flan-t5 to segment Chinese. \r\nis there some thing I should do to resolve this? like upgrade pip package.\r\nor other things?",
"You should just read this issue 😉 nothing we can do about it unfortunately ",
"@ArthurZucker ok, I read that commit,and I get it, thank you"
] | 1,677
| 1,679
| 1,677
|
NONE
| null |
### System Info
transformers == 4.26.1
pytorch == 1.13.1
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl")
tokenizer.decode(tokenizer("你好你好吗").input_ids)
```
returns `<unk></s>`.
Similarly, the tokenizer can't encode curly braces (`{` or `}`) or `\n` or `\t`, making it useless for code. Is the tokenizer included with the model the right one?
### Expected behavior
The tokenizer should be able to encode Asian languages (including Chinese) as well as code. The model was trained on both according to the paper. Did you port the proper tokenizer from the T5x repo?
I would appreciate your help.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21836/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21835
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21835/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21835/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21835/events
|
https://github.com/huggingface/transformers/issues/21835
| 1,602,240,060
|
I_kwDOCUB6oc5fgD48
| 21,835
|
Troubleshooting AttributeError: 'Seq2SeqTimeSeriesPredictionOutput' object has no attribute 'sequences'
|
{
"login": "hassanshallal",
"id": 19214052,
"node_id": "MDQ6VXNlcjE5MjE0MDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/19214052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hassanshallal",
"html_url": "https://github.com/hassanshallal",
"followers_url": "https://api.github.com/users/hassanshallal/followers",
"following_url": "https://api.github.com/users/hassanshallal/following{/other_user}",
"gists_url": "https://api.github.com/users/hassanshallal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hassanshallal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hassanshallal/subscriptions",
"organizations_url": "https://api.github.com/users/hassanshallal/orgs",
"repos_url": "https://api.github.com/users/hassanshallal/repos",
"events_url": "https://api.github.com/users/hassanshallal/events{/privacy}",
"received_events_url": "https://api.github.com/users/hassanshallal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey. I don't think you are using the correct version of transformers. I can't reproduce this and our documentation tests ensure that this works. Also see here for the `sequence` [output](https://github.com/ArthurZucker/transformers/blob/main/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py#L535) . As you can see, the `generate` method of `TimeSeriesTransformerForPrediction` return this class.",
"Hi Arthur, thank you for the response. I revisited the tutorial and it turned out I had missed a step. It's all good now and this issue can be closed."
] | 1,677
| 1,677
| 1,677
|
NONE
| null |
Hi, I am reporting this issue as recommended.
Context: I am trying to set up and test the transformers library. I completed the installation and followed the steps in this specific time series transformer tutorial: https://huggingface.co/docs/transformers/model_doc/time_series_transformer
I received an error when I tried to invoke:
`mean_prediction = outputs.sequences.mean(dim=1)`
The error: AttributeError: 'Seq2SeqTimeSeriesPredictionOutput' object has no attribute 'sequences'
When I ran:
`print(outputs.keys())`
I got:
`odict_keys(['loss', 'params', 'encoder_last_hidden_state', 'scale', 'static_features'])`
I searched the documentation but could not find any match.
```
import transformers as tfs
print(tfs.__version__)
```
which prints `4.26.1`.
I would really appreciate some feedback in regards to this issue.
Thank you
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21835/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21834
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21834/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21834/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21834/events
|
https://github.com/huggingface/transformers/pull/21834
| 1,602,219,627
|
PR_kwDOCUB6oc5K4t_p
| 21,834
|
Fix tf random token masking probability in data collator
|
{
"login": "anruijian",
"id": 115125339,
"node_id": "U_kgDOBtysWw",
"avatar_url": "https://avatars.githubusercontent.com/u/115125339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anruijian",
"html_url": "https://github.com/anruijian",
"followers_url": "https://api.github.com/users/anruijian/followers",
"following_url": "https://api.github.com/users/anruijian/following{/other_user}",
"gists_url": "https://api.github.com/users/anruijian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anruijian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anruijian/subscriptions",
"organizations_url": "https://api.github.com/users/anruijian/orgs",
"repos_url": "https://api.github.com/users/anruijian/repos",
"events_url": "https://api.github.com/users/anruijian/events{/privacy}",
"received_events_url": "https://api.github.com/users/anruijian/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ArthurZucker Do I need to merge it now?"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #21803
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
--> @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21834/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21834",
"html_url": "https://github.com/huggingface/transformers/pull/21834",
"diff_url": "https://github.com/huggingface/transformers/pull/21834.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21834.patch",
"merged_at": 1677588948000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21833
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21833/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21833/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21833/events
|
https://github.com/huggingface/transformers/pull/21833
| 1,602,086,684
|
PR_kwDOCUB6oc5K4R8p
| 21,833
|
Fixed gradient_checkpointing/use_cache bug in blenderbot
|
{
"login": "Batese2001",
"id": 69521504,
"node_id": "MDQ6VXNlcjY5NTIxNTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/69521504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Batese2001",
"html_url": "https://github.com/Batese2001",
"followers_url": "https://api.github.com/users/Batese2001/followers",
"following_url": "https://api.github.com/users/Batese2001/following{/other_user}",
"gists_url": "https://api.github.com/users/Batese2001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Batese2001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Batese2001/subscriptions",
"organizations_url": "https://api.github.com/users/Batese2001/orgs",
"repos_url": "https://api.github.com/users/Batese2001/repos",
"events_url": "https://api.github.com/users/Batese2001/events{/privacy}",
"received_events_url": "https://api.github.com/users/Batese2001/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@younesbelkada Thanks for your help! It should be all good"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #21737 for blenderbot
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21833/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21833",
"html_url": "https://github.com/huggingface/transformers/pull/21833",
"diff_url": "https://github.com/huggingface/transformers/pull/21833.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21833.patch",
"merged_at": 1677944754000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21832
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21832/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21832/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21832/events
|
https://github.com/huggingface/transformers/issues/21832
| 1,602,053,267
|
I_kwDOCUB6oc5ffWST
| 21,832
|
Illegal memory access when using Trainer API on GPU with PyTorch 2.0's Inductor backend
|
{
"login": "Lokiiiiii",
"id": 36520926,
"node_id": "MDQ6VXNlcjM2NTIwOTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/36520926?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lokiiiiii",
"html_url": "https://github.com/Lokiiiiii",
"followers_url": "https://api.github.com/users/Lokiiiiii/followers",
"following_url": "https://api.github.com/users/Lokiiiiii/following{/other_user}",
"gists_url": "https://api.github.com/users/Lokiiiiii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lokiiiiii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lokiiiiii/subscriptions",
"organizations_url": "https://api.github.com/users/Lokiiiiii/orgs",
"repos_url": "https://api.github.com/users/Lokiiiiii/repos",
"events_url": "https://api.github.com/users/Lokiiiiii/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lokiiiiii/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Pretty much the same as #21826 , inductor backend is not yet supported.",
"@ArthurZucker Can you elaborate on this ?\r\n\r\nDo you mean the Trainer class is using a piece of code that is unsupported by inductor backend ?\r\nIs this something that you will wait on PyTorch to fix or are you amenable to workarounds in the Trainer class for the inductor backend ?\r\n\r\nIs there already a root cause that boils down to a few pieces of code ?\r\n",
"Hi @Lokiiiiii We haven't investigated the issue yet, to make sure if it comes from a bug in PyTorch or in Transformers. Stay tuned!",
"I can reproduce this issue and have something smaller:\r\n```py\r\nimport torch\r\n\r\nimport transformers\r\nfrom transformers import AutoModelForMaskedLM, AutoTokenizer, DataCollatorForLanguageModeling\r\n\r\ndef main():\r\n model_name = \"bert-base-uncased\"\r\n tokenizer = AutoTokenizer.from_pretrained(model_name)\r\n model = AutoModelForMaskedLM.from_pretrained(model_name)\r\n \r\n texts = [\"This is a text for the example.\"] * 16\r\n tokenized_texts = tokenizer(texts, padding=\"max_length\", truncation=True, max_length=128, return_tensors=\"pt\") \r\n data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer)\r\n batch = tokenized_texts\r\n batch[\"input_ids\"], batch[\"labels\"] = data_collator.torch_mask_tokens(batch[\"input_ids\"])\r\n\r\n model = torch.compile(model, backend=\"inductor\")\r\n model.to(\"cuda\")\r\n\r\n batch = {k: v.to(\"cuda\") for k, v in batch.items()}\r\n outputs = model(**batch)\r\n loss = outputs.loss\r\n loss.backward()\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\nReaching out to the PyTorch folks as it is raised in the model forward, so not something in Transformers at first glance.",
"See https://github.com/pytorch/pytorch/issues/95794",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,680
| 1,680
|
NONE
| null |
### System Info
### System Information
- `transformers` version: 4.26.1
- Platform: Linux-5.15.0-1028-aws-x86_64-with-glibc2.10
- Python version: 3.8.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 2.0.0a0+git8693604 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes (A10G/A100)
- Using distributed or parallel set-up in script?: No
### More information
1. Training is successful in eager mode with batch size of 128
2. Training is successful in dynamo + eager mode with batch size of 128
3. Training is only able to succeed with dynamo + inductor with batch size of 2
4. [Dynamo benchmarks](https://github.com/pytorch/pytorch/blob/master/benchmarks/dynamo/huggingface.py) which use the same HF models without Trainer API are able to succeed.
### Error Signature
| 1677525140847 | Traceback (most recent call last): File "./run_mlm.py", line 694, in <module> |
| 1677525140847 | main() File "./run_mlm.py", line 635, in main |
| 1677525140847 | train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1631, in train |
| 1677525140847 | return inner_training_loop( File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1898, in _inner_training_loop |
| 1677525140848 | tr_loss_step = self.training_step(model, inputs) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2640, in training_step |
| 1677525140848 | loss = self.compute_loss(model, inputs) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2672, in compute_loss |
| 1677525140848 | outputs = model(**inputs) File "/opt/conda/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 82, in __call__ return self.dynamo_ctx(self._orig_mod.__call__)(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 215, in _fn |
| 1677525140848 | return fn(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl |
| 1677525140848 | return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 1324, in forward |
| 1677525140848 | @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) File "/opt/conda/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 215, in _fn |
| 1677525140848 | return fn(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py", line 2816, in forward |
| 1677525140848 | return compiled_fn(full_args) File "/opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py", line 1222, in g |
| 1677525140848 | return f(*args) File "/opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py", line 2383, in debug_compiled_function |
| 1677525140848 | return compiled_function(*args) File "/opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py", line 1895, in runtime_wrapper |
| 1677525140848 | all_outs = call_func_with_args( File "/opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py", line 1256, in call_func_with_args |
| 1677525140848 | out = normalize_as_list(f(*args)) File "/opt/conda/lib/python3.8/site-packages/torch/autograd/function.py", line 506, in apply |
| 1677525140848 | return super().apply(*args, **kwargs) # type: ignore[misc] File "/opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py", line 2148, in forward |
| 1677525140848 | fw_outs = call_func_with_args( File "/opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py", line 1247, in call_func_with_args |
| 1677525140848 | out = normalize_as_list(f(args)) File "/opt/conda/lib/python3.8/site-packages/torch/_inductor/compile_fx.py", line 248, in run |
| 1677525140848 | return model(new_inputs) File "/tmp/torchinductor_root/rj/crjch2m3bp6tuhd3s6n2apgbibxay4o6o5jlrfbwsfiokrv2rkep.py", line 4483, in call |
| 1677525140848 | triton__49.run(primals_208, buf513, buf514, 128, 128, grid=grid(128), stream=stream0) File "/opt/conda/lib/python3.8/site-packages/torch/_inductor/triton_ops/autotune.py", line 184, in run |
| 1677525140848 | self.autotune_to_one_config(*args, grid=grid) File "/opt/conda/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper |
| 1677525140848 | r = func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/_inductor/triton_ops/autotune.py", line 171, in autotune_to_one_config timings = { File "/opt/conda/lib/python3.8/site-packages/torch/_inductor/triton_ops/autotune.py", line 172, in <dictcomp> |
| 1677525140848 | launcher: self.bench(launcher, *cloned_args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/_inductor/triton_ops/autotune.py", line 153, in bench return do_bench(kernel_call, rep=40, fast_flush=True) File "/opt/conda/lib/python3.8/site-packages/triton/testing.py", line 144, in do_bench |
| 1677525140848 | torch.cuda.synchronize() File "/opt/conda/lib/python3.8/site-packages/torch/cuda/__init__.py", line 711, in synchronize |
| 1677525140848 | return torch._C._cuda_synchronize() |
| 1677525140848 | RuntimeError: CUDA error: an illegal memory access was encountered
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Training Command: ['python', './run_mlm.py', '--model_name_or_path', 'bert-base-uncased', '--output_dir', '/opt/ml/model', '--fp16', '--dataloader_drop_last', '--dataset_config_name', 'wikitext-2-raw-v1', '--dataset_name', 'wikitext', '--do_train', '--evaluation_strategy', 'no', '--logging_strategy', 'epoch', '--max_seq_length', '128', '--num_train_epochs', '50', '--overwrite_output_dir', '--per_device_train_batch_size', '128', '--save_strategy', 'no', '--torch_compile_backend', 'inductor']
### Expected behavior
No exceptions when using Inductor backend with the trainer API.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21832/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21831
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21831/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21831/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21831/events
|
https://github.com/huggingface/transformers/issues/21831
| 1,602,013,225
|
I_kwDOCUB6oc5ffMgp
| 21,831
|
HF pipeline throws error
|
{
"login": "sindhuvahinis",
"id": 56774226,
"node_id": "MDQ6VXNlcjU2Nzc0MjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/56774226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sindhuvahinis",
"html_url": "https://github.com/sindhuvahinis",
"followers_url": "https://api.github.com/users/sindhuvahinis/followers",
"following_url": "https://api.github.com/users/sindhuvahinis/following{/other_user}",
"gists_url": "https://api.github.com/users/sindhuvahinis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sindhuvahinis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sindhuvahinis/subscriptions",
"organizations_url": "https://api.github.com/users/sindhuvahinis/orgs",
"repos_url": "https://api.github.com/users/sindhuvahinis/repos",
"events_url": "https://api.github.com/users/sindhuvahinis/events{/privacy}",
"received_events_url": "https://api.github.com/users/sindhuvahinis/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey! Would you mind providing a minimal reproducing script? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,680
| 1,680
|
NONE
| null |
### System Info
The HF pipeline actually tries to generate the outputs on CPU despite `device_map="auto"` being set as configuration for the GPT-NeoX 20B model.
A workaround is to call the `model.generate` method directly after manually moving the `input_ids` to the GPU.
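A minimal sketch of that workaround (the device-moving helper is the relevant part; the model id and generation arguments in the commented usage are illustrative assumptions, not from the original report):

```python
def move_to_device(encoded, device):
    """Move every tensor in a tokenizer's output dict to `device`.

    This is the manual step the pipeline skipped here: without it,
    half-precision ops such as top-k sampling run on CPU and fail with
    `"topk_cpu" not implemented for 'Half'`.
    """
    return {name: tensor.to(device) for name, tensor in encoded.items()}

# Hedged usage sketch (requires transformers and a loaded model):
#
#   inputs = tokenizer("Hello, my name is", return_tensors="pt")
#   device = next(model.parameters()).device
#   output_ids = model.generate(**move_to_device(inputs, device),
#                               max_new_tokens=20, do_sample=True, top_k=50)
```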
```
[INFO ] PyProcess - prediction = self.hf_pipeline(data, **parameters)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/text_generation.py", line 187, in __call__
[INFO ] PyProcess - return super().__call__(text_inputs, **kwargs)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py", line 1063, in __call__
[INFO ] PyProcess - outputs = [output for output in final_iterator]
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py", line 1063, in <listcomp>
[INFO ] PyProcess - outputs = [output for output in final_iterator]
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/pt_utils.py", line 111, in __next__
[INFO ] PyProcess - item = next(self.iterator)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/pt_utils.py", line 112, in __next__
[INFO ] PyProcess - processed = self.infer(item, **self.params)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py", line 990, in forward
[INFO ] PyProcess - model_outputs = self._forward(model_inputs, **forward_params)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/text_generation.py", line 229, in _forward
[INFO ] PyProcess - generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
[INFO ] PyProcess - return func(*args, **kwargs)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/generation_utils.py", line 1422, in generate
[INFO ] PyProcess - return self.sample(
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/generation_utils.py", line 2049, in sample
[INFO ] PyProcess - next_token_scores = logits_warper(input_ids, next_token_scores)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/generation_logits_process.py", line 92, in __call__
[INFO ] PyProcess - scores = processor(input_ids, scores)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/generation_logits_process.py", line 233, in __call__
[INFO ] PyProcess - indices_to_remove = scores < torch.topk(scores, top_k)[0][..., -1, None]
[INFO ] PyProcess - RuntimeError: "topk_cpu" not implemented for 'Half'
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
https://github.com/deepjavalibrary/djl-serving/blob/master/engines/python/setup/djl_python/huggingface.py#L129 - This is the code we used to test.
### Expected behavior
This error happens only for GPT-NeoX 20B https://huggingface.co/EleutherAI/gpt-neox-20b. It worked for the Bloom 7B and GPT-J models.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21831/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21830
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21830/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21830/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21830/events
|
https://github.com/huggingface/transformers/pull/21830
| 1,601,897,685
|
PR_kwDOCUB6oc5K3pPF
| 21,830
|
Temporarily fix ONNX model exporting error
|
{
"login": "SatyaJandhyalaAtMS",
"id": 55203776,
"node_id": "MDQ6VXNlcjU1MjAzNzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/55203776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SatyaJandhyalaAtMS",
"html_url": "https://github.com/SatyaJandhyalaAtMS",
"followers_url": "https://api.github.com/users/SatyaJandhyalaAtMS/followers",
"following_url": "https://api.github.com/users/SatyaJandhyalaAtMS/following{/other_user}",
"gists_url": "https://api.github.com/users/SatyaJandhyalaAtMS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SatyaJandhyalaAtMS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SatyaJandhyalaAtMS/subscriptions",
"organizations_url": "https://api.github.com/users/SatyaJandhyalaAtMS/orgs",
"repos_url": "https://api.github.com/users/SatyaJandhyalaAtMS/repos",
"events_url": "https://api.github.com/users/SatyaJandhyalaAtMS/events{/privacy}",
"received_events_url": "https://api.github.com/users/SatyaJandhyalaAtMS/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @amyeroberts ",
"Thanks for opening this PR @SatyaJandhyalaAtMS ! \r\n\r\nCould you share a link to the issue this resolves? I'm getting a 404 error for the link in the commit / PR title: https://github.com/microsoft/onnx-converters-private/issues/143",
"The python code to reproduce the error is:\r\n```\r\nimport onnxruntime as ort\r\nfrom transformers import AutoImageProcessor, AutoModelForImageClassification\r\nfrom PIL import Image\r\nimport requests\r\nimport numpy as np\r\nimport torch.onnx\r\n\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\nprocessor = AutoImageProcessor.from_pretrained(\"microsoft/swinv2-tiny-patch4-window8-256\")\r\nmodel = AutoModelForImageClassification.from_pretrained(\"microsoft/swinv2-tiny-patch4-window8-256\")\r\ninputs = processor(images=image, return_tensors=\"pt\")\r\noutputs = model(**inputs)\r\nlogits = outputs.logits\r\n# model predicts one of the 1000 ImageNet classes\r\npredicted_class_idx = logits.argmax(-1).item()\r\noptions = ort.SessionOptions()\r\n# options.log_severity_level = 0\r\ntorch.onnx.export(model, inputs['pixel_values'],\"swinv2.onnx\", export_params=True, opset_version=11, do_constant_folding=True, input_names=[\"input\"], output_names=[\"output\"])\r\nort_sess = ort.InferenceSession(\"swinv2.onnx\", providers=[\"CUDAExecutionProvider\"], sess_options=options)\r\nort_outputs=ort_sess.run(None, {\"input\":inputs['pixel_values'].numpy()})\r\nort_prediction=int(np.argmax(np.array(ort_outputs[0]).squeeze(), axis=0))\r\nif ort_prediction == predicted_class_idx:\r\n print(\"Test passed\")\r\nelse:\r\n print(\"Test failed\")\r\n```\r\nThe error is as follows:\r\n\r\nTraceback (most recent call last):\r\n File \"test_with_ort.py\", line 19, in <module>\r\n torch.onnx.export(model, inputs['pixel_values'],\"swinv2.onnx\", export_params=True, opset_version=11, do_constant_folding=True, input_names=[\"input\"], output_names=[\"output\"])\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/onnx/utils.py\", line 504, in export\r\n _export(\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/onnx/utils.py\", line 1529, in _export\r\n 
graph, params_dict, torch_out = _model_to_graph(\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/onnx/utils.py\", line 1111, in _model_to_graph\r\n graph, params, torch_out, module = _create_jit_graph(model, args)\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/onnx/utils.py\", line 987, in _create_jit_graph\r\n graph, torch_out = _trace_and_get_graph_from_model(model, args)\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/onnx/utils.py\", line 891, in _trace_and_get_graph_from_model\r\n trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/jit/_trace.py\", line 1184, in _get_trace_graph\r\n outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/jit/_trace.py\", line 127, in forward\r\n graph, out = torch._C._create_graph_by_tracing(\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/jit/_trace.py\", line 118, in wrapper\r\n outs.append(self.inner(*trace_inputs))\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1182, in _slow_forward\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/swinv2/modeling_swinv2.py\", line 1274, in forward\r\n outputs = self.swinv2(\r\n File 
\"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1182, in _slow_forward\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/swinv2/modeling_swinv2.py\", line 1078, in forward\r\n encoder_outputs = self.encoder(\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1182, in _slow_forward\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/swinv2/modeling_swinv2.py\", line 907, in forward\r\n layer_outputs = layer_module(\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1182, in _slow_forward\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/swinv2/modeling_swinv2.py\", line 821, in forward\r\n layer_outputs = layer_module(\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1182, in _slow_forward\r\n result = self.forward(*input, **kwargs)\r\n File 
\"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/swinv2/modeling_swinv2.py\", line 723, in forward\r\n self.set_shift_and_window_size(input_dimensions)\r\n File \"/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/swinv2/modeling_swinv2.py\", line 670, in set_shift_and_window_size\r\n if input_resolution\r\n**TypeError: '<=' not supported between instances of 'tuple' and 'Tensor'**",
"My Python environment is \r\nonnxruntime 1.14.1\r\ntorch 1.13.1+cu116\r\ntorchaudio 0.13.1+cu116\r\ntorchvision 0.14.1+cu116\r\ntransformers 4.26.1\r\nMy platform is\r\nUbuntu 22.04",
"Thanks for updating @SatyaJandhyalaAtMS ! \r\n\r\nThe tests are currently failing on the code quality checks. As the lines of code that have been modified are in a class with a `# Copied from` header, the original code source the comment points to will need to be updated i.e. the equivalent line in `transformers.models.swin.modeling_swin.SwinOutput`. \r\n\r\nThen run `make fix-copies` to propogate the change across the repo. \r\n\r\nTo get the other styling tests pass, run `make style` to have the code formatted in the expected style\r\n\r\n"
] | 1,677
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
…issues/143
# What does this PR do?
Fix the following error when trying to export an ONNX model:
**TypeError: '<=' not supported between instances of 'tuple' and 'Tensor'**
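The failure mode can be shown in isolation: during ONNX tracing, `input_resolution` arrives as a `torch.Tensor`, and comparing it against a window-size `tuple` with `<=` raises. A pure-Python stand-in (an `int` plays the Tensor's role; this is not the Swinv2 code itself):

```python
# Mimics the comparison in Swinv2's set_shift_and_window_size, where a
# tuple window size is compared against a traced Tensor input resolution.
def compare(input_resolution, window_size):
    return input_resolution <= window_size

try:
    compare((8, 8), 7)  # tuple vs. scalar, like tuple vs. Tensor under tracing
except TypeError as exc:
    print(f"Raises: {exc}")
```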
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21830/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21830",
"html_url": "https://github.com/huggingface/transformers/pull/21830",
"diff_url": "https://github.com/huggingface/transformers/pull/21830.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21830.patch",
"merged_at": 1678978587000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21829
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21829/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21829/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21829/events
|
https://github.com/huggingface/transformers/pull/21829
| 1,601,823,526
|
PR_kwDOCUB6oc5K3Zbh
| 21,829
|
Add: task guide for zero shot object detection
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Images are in this PR https://huggingface.co/datasets/huggingface/documentation-images/discussions/49",
"_The documentation is not available anymore as the PR was closed or merged._",
"> Nice work, thanks for adding this! I especially like the brief intro to the OWL-ViT model. Maybe we can embed one of the OWL-ViT demos (like this [one](https://huggingface.co/spaces/adirik/OWL-ViT)) directly on the page so users can play with it?\r\n\r\nThanks for the suggestion! I embedded demo :)"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR adds a task guide for zero-shot object detection. Unlike other task guides, there is no fine-tuning or complex preprocessing of custom data. Instead, the guide illustrates different ways of running inference with OWL-ViT: using the pipeline, manual inference with text queries, manual inference for a batch of examples, and image-guided object detection.
The task guide is based on [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb) with some additions (e.g. pipeline example) and some modifications.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21829/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21829/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21829",
"html_url": "https://github.com/huggingface/transformers/pull/21829",
"diff_url": "https://github.com/huggingface/transformers/pull/21829.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21829.patch",
"merged_at": 1677597788000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21828
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21828/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21828/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21828/events
|
https://github.com/huggingface/transformers/pull/21828
| 1,601,736,525
|
PR_kwDOCUB6oc5K3HXF
| 21,828
|
Fix quality with `ruff==0.0.253`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21828). All of your documentation changes will be reflected on that endpoint."
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Fix quality with `ruff==0.0.253`.
Merged to avoid CI failures due to the new `ruff==0.0.253` release.
- The change required by this new version is valid, so I decided to go with it instead of pinning an older version.
- The change also works with previous `ruff` versions (`0.0.252` and `0.0.243`), so contributors don't need to upgrade their `ruff`.
cc @sgugger for comments if any.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21828/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21828",
"html_url": "https://github.com/huggingface/transformers/pull/21828",
"diff_url": "https://github.com/huggingface/transformers/pull/21828.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21828.patch",
"merged_at": 1677523124000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21827
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21827/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21827/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21827/events
|
https://github.com/huggingface/transformers/pull/21827
| 1,601,736,420
|
PR_kwDOCUB6oc5K3HVr
| 21,827
|
Add: task guide for zero shot object detection
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR adds a task guide for zero-shot object detection. Unlike other task guides, there is no fine-tuning or complex preprocessing of custom data. Instead, the guide illustrates different ways of running inference with OWL-ViT: using the pipeline, manual inference with text queries, manual inference for a batch of examples, and image-guided object detection.
The task guide is based on [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb) with some additions (e.g. pipeline example) and some modifications.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21827/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21827",
"html_url": "https://github.com/huggingface/transformers/pull/21827",
"diff_url": "https://github.com/huggingface/transformers/pull/21827.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21827.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21826
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21826/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21826/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21826/events
|
https://github.com/huggingface/transformers/issues/21826
| 1,601,725,305
|
I_kwDOCUB6oc5feGN5
| 21,826
|
Faketensor issue when using torch inductor as backend with Trainer API
|
{
"login": "YuchengT",
"id": 109311506,
"node_id": "U_kgDOBoP2Eg",
"avatar_url": "https://avatars.githubusercontent.com/u/109311506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YuchengT",
"html_url": "https://github.com/YuchengT",
"followers_url": "https://api.github.com/users/YuchengT/followers",
"following_url": "https://api.github.com/users/YuchengT/following{/other_user}",
"gists_url": "https://api.github.com/users/YuchengT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YuchengT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YuchengT/subscriptions",
"organizations_url": "https://api.github.com/users/YuchengT/orgs",
"repos_url": "https://api.github.com/users/YuchengT/repos",
"events_url": "https://api.github.com/users/YuchengT/events{/privacy}",
"received_events_url": "https://api.github.com/users/YuchengT/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Diving into this issue, the forward pass of a ViT model (or also ResNet) segfaults on my side (I don't have your error message, but you are using multiple GPUs with DataParallel if I read the traceback correctly, which is probably not supported). I'll reach out to the PyTorch team.",
"Ok, the segmentation fault actually came from a mix of torchvision stable and torch nightlies. With \r\n```\r\npip3 install --pre torch torchvision --force-reinstall --index-url https://download.pytorch.org/whl/nightly/cu117\r\n```\r\nI don't get the segfaults when running the forward passes and I can run the example on one GPU with torch inductor.",
"Thanks for checking in. I just tried to restrict the use to a single GPU, the original Faketensor issue is gone and training goes normal.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,681
| 1,681
|
NONE
| null |
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.15.0-1030-aws-x86_64-with-glibc2.10
- Python version: 3.8.0
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 2.0.0a0+git45d775c (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the error:
```
pip3 install numpy --pre torch --force-reinstall --index-url https://download.pytorch.org/whl/nightly/cu117
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
cd examples/pytorch/image-classification
pip install -r requirements.txt
python run_image_classification.py \
--dataset_name food101 --output_dir ./food101_outputs/ \
--remove_unused_columns False --do_train --learning_rate 2e-5 \
--num_train_epochs 1 --report_to none --per_device_train_batch_size 1 \
--logging_strategy steps --logging_steps 10 --save_strategy epoch \
--overwrite_output_dir --torch_compile_backend inductor
```
### Expected behavior
In the forward pass we saw
```
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 64, in _worker
output = module(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 82, in __call__
return self.dynamo_ctx(self._orig_mod.__call__)(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 215, in _fn
return fn(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 343, in catch_errors
return callback(frame, cache_size, hooks)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/bytecode_transformation.py", line 530, in transform_code_object
transformations(instructions, code_options)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 311, in transform
tracer.run()
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1862, in run
super().run()
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 619, in run
and self.step()
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 583, in step
getattr(self, inst.opname)(inst)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1119, in STORE_ATTR
self.output.compile_subgraph(
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/output_graph.py", line 579, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/output_graph.py", line 626, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/output_graph.py", line 713, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised Exception: Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'. Found in aten.convolution.default(*(FakeTensor(FakeTensor(..., device='meta', size=(1, 3, 224, 224)), cuda:0), tensor([[[[ 1.5585e-02,  5.1153e-02,  ...]]]], device='cuda:0',
       grad_fn=<BroadcastBackward>), tensor([-1.6090e-02,  1.2174e-02,  ...], device='cuda:0',
       grad_fn=<BroadcastBackward>), [16, 16], [0, 0], [1, 1], False, [0, 0], 1), **{})
While executing %self_vit_embeddings_patch_embeddings_projection : [#users=1] = call_module[target=self_vit_embeddings_patch_embeddings_projection](args = (%pixel_values,), kwargs = {})
Original traceback:
File "/home/ubuntu/transformers/src/transformers/models/vit/modeling_vit.py", line 175, in forward
embeddings = self.projection(pixel_values).flatten(2).transpose(1, 2)
| File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
| File "/home/ubuntu/transformers/src/transformers/models/vit/modeling_vit.py", line 117, in forward
embeddings = self.patch_embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding)
| File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
| File "/home/ubuntu/transformers/src/transformers/models/vit/modeling_vit.py", line 573, in forward
embedding_output = self.embeddings(
| File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
| File "/home/ubuntu/transformers/src/transformers/models/vit/modeling_vit.py", line 787, in forward
outputs = self.vit(
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21826/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21825
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21825/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21825/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21825/events
|
https://github.com/huggingface/transformers/pull/21825
| 1,601,659,931
|
PR_kwDOCUB6oc5K22jY
| 21,825
|
Rename `MobileViTModelTest` to `TFMobileViTModelTest`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"No worry :-) "
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
@sayakpaul Let's give TF a bit more love ❤️ 🙏.
(joking aside, having consistent and proper prefixes makes things easier)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21825/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21825",
"html_url": "https://github.com/huggingface/transformers/pull/21825",
"diff_url": "https://github.com/huggingface/transformers/pull/21825.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21825.patch",
"merged_at": 1677568230000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21824
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21824/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21824/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21824/events
|
https://github.com/huggingface/transformers/pull/21824
| 1,601,478,857
|
PR_kwDOCUB6oc5K2PjC
| 21,824
|
TTS fine-tuning for SpeechT5
|
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Requesting review from @ArthurZucker for the custom STFT / log-Mel feature extraction components (`feature_extraction_speecht5.py` is the file of interest)",
"Gently pinging @ArthurZucker :)",
"Will review in 1h! Sorry for the delay ",
"> * Have the slow integration tests for the SpeechT5 models been run to check outputs are the same with the processing updates?\r\n\r\nThe outputs are not the same because the processing of the labels changed. But that's OK since the labels weren't used up to this point anyway.\r\n\r\n> * Am I right in understanding `stop_labels` were never used (and so removal doesn't affect things?)\r\n\r\nCorrect.\r\n\r\n> * With `reduction_factor` being moved to `shift_spectrograms_right`, does this effectively mean the `input_values` output from the processor has changed for the same config?\r\n\r\nIt didn't affect the `input_values`, only the labels. So nothing changed there for the normal operation of the model.",
"@amyeroberts If you're OK with the changes, I think this can be merged now. The failing tests seem unrelated to SpeechT5.",
"I'm pretty sure no one was using any of these properties before, since we only released SpeechT5 very recently and no one would have used it for training yet. Adding deprecation warnings seems excessive to me in this case.",
"OK, put frame_signal_scale and reduction_factor back and added a deprecation warning.",
"If you're all happy with it, feel free to merge (I don't have rights for that). 😃 ",
"@hollance - sorry, my bad, I thought you did! "
] | 1,677
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds fine-tuning support for SpeechT5, in particular the TTS model.
The loss function is a combination of L1 loss for the mel-spectrograms, BCE for the stop token prediction, and (optionally) guided attention loss to persuade the cross-attentions to be diagonal.
The STFT feature extraction has been sped up, which also means it currently assumes the frame size is a power of two and throws an error otherwise.
The feature extractor no longer outputs a `stop_labels` target. Padded areas in the spectrogram target are assumed to have the value -100 during training; from this the stop labels are computed automatically.
Various other small fixes to the tokenizer, processor, etc to support fine-tuning.
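For readers unfamiliar with the guided attention loss mentioned above, the soft-diagonal penalty mask it relies on can be sketched as follows. This is a minimal stdlib-only illustration of the standard formulation (Tachibana et al., 2017), not the exact code in this PR; the function name and the sharpness parameter `g` are illustrative assumptions.

```python
import math

def guided_attention_mask(num_target_steps, num_source_steps, g=0.2):
    """Soft-diagonal penalty mask: ~0 on the diagonal, approaching 1 off it.

    Multiplying this mask elementwise with the cross-attention weights and
    averaging penalizes attention that strays far from the diagonal, which
    "persuades" the cross-attentions to be monotonic/diagonal.
    """
    return [
        [
            1.0
            - math.exp(
                -(((n / num_target_steps) - (t / num_source_steps)) ** 2)
                / (2.0 * g ** 2)
            )
            for t in range(num_source_steps)
        ]
        for n in range(num_target_steps)
    ]

mask = guided_attention_mask(50, 50)
```

Entries on the diagonal are near zero, so diagonal attention incurs almost no loss, while far-off-diagonal attention is penalized close to its maximum.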
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21824/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/21824/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21824",
"html_url": "https://github.com/huggingface/transformers/pull/21824",
"diff_url": "https://github.com/huggingface/transformers/pull/21824.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21824.patch",
"merged_at": 1681809151000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21823
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21823/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21823/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21823/events
|
https://github.com/huggingface/transformers/pull/21823
| 1,601,461,206
|
PR_kwDOCUB6oc5K2LsF
| 21,823
|
Make Slack CI reporting stronger
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Make Slack CI reporting stronger.
The most important change in this PR is to **use a token when we need to grab some GitHub workflow/jobs information** using an API call, like
```python
https://api.github.com/repos/huggingface/transformers/actions/runs
```
to get all job links.
**This could avoid reaching the rate limit in CI runs and keep the CI reporting working.**
Such an error occurred once on 2023/02/24; see [this run](https://github.com/huggingface/transformers/actions/runs/4258755021/jobs/7424107028). The log shows `Unknown error, could not fetch links. 'jobs'`, but the underlying reason (I strongly believe) is that the rate limit was reached, so the API call returned the 2 keys `message` and `documentation` without the key `jobs`.
The other changes are just to make things better too.
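A minimal sketch of the idea, assuming the token is passed in from the workflow environment; `check_payload` and `fetch_runs` are hypothetical names for illustration, not the actual utility functions in the CI scripts:

```python
import json
import urllib.request


def check_payload(payload: dict) -> dict:
    """Fail loudly when the API answer is a rate-limit message instead of data.

    A rate-limited response contains only `message` and a documentation link,
    so guard against the expected key being absent rather than raising a
    bare KeyError later.
    """
    if "workflow_runs" not in payload and "jobs" not in payload:
        raise RuntimeError(f"Could not fetch CI data: {payload.get('message')}")
    return payload


def fetch_runs(repo: str, token: str) -> dict:
    """Fetch workflow runs with an authenticated call (much higher rate limit)."""
    url = f"https://api.github.com/repos/{repo}/actions/runs"
    request = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )
    with urllib.request.urlopen(request) as response:
        return check_payload(json.load(response))
```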
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21823/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21823",
"html_url": "https://github.com/huggingface/transformers/pull/21823",
"diff_url": "https://github.com/huggingface/transformers/pull/21823.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21823.patch",
"merged_at": 1677600765000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21822
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21822/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21822/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21822/events
|
https://github.com/huggingface/transformers/issues/21822
| 1,601,307,323
|
I_kwDOCUB6oc5fcgK7
| 21,822
|
Extend Callback API for remote execution of ClearML Experiments
|
{
"login": "thepycoder",
"id": 11781950,
"node_id": "MDQ6VXNlcjExNzgxOTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11781950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thepycoder",
"html_url": "https://github.com/thepycoder",
"followers_url": "https://api.github.com/users/thepycoder/followers",
"following_url": "https://api.github.com/users/thepycoder/following{/other_user}",
"gists_url": "https://api.github.com/users/thepycoder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thepycoder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thepycoder/subscriptions",
"organizations_url": "https://api.github.com/users/thepycoder/orgs",
"repos_url": "https://api.github.com/users/thepycoder/repos",
"events_url": "https://api.github.com/users/thepycoder/events{/privacy}",
"received_events_url": "https://api.github.com/users/thepycoder/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi there. The `Trainer` is not allowed to modify its own `TrainingArguments`, a design choice we made so that reproducibility and resuming from a checkpoint work properly. This is why the callbacks are also not allowed to change the training arguments. It's probably best for this use case if you simply subclass the `Trainer` API and either add a new method or override what you need.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,680
| 1,680
|
CONTRIBUTOR
| null |
### Feature request
Hi!
Not that long ago, we added support for the ClearML experiment manager as a training callback, and the general feedback has been good so far! However, ClearML is more than an experiment manager alone.
In order to allow users to quickly experiment, ClearML can clone an existing experiment from the UI and then override the originally captured hyperparameters. It does this by injecting the new parameter values into the code at runtime. A user can then schedule and run this edited experiment clone on a remote machine.
But in order for this functionality to work properly, ClearML has to be able to initialize, access and overwrite the training parameters even before they are first used by the Trainer. The current callback implementation does not allow this.
Do you think this is something worth adding? I suspect it's rather easy to add a new, very early callback route, but it sounds harder to me to allow a callback to override the training arguments. What do you think?
### Motivation
We received feedback on our own support Slack channel from users trying to run Transformers remotely whose parameters were not being overridden. More advanced ClearML functionality like pipelines and HPO depends on this functionality to work properly.
### Your contribution
We'd be very willing to make a PR; this issue is meant to discuss whether you agree that it can be properly added and, if so, how you would like to see it done in practice :) Thank you for the consideration!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21822/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21821
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21821/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21821/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21821/events
|
https://github.com/huggingface/transformers/issues/21821
| 1,600,982,282
|
I_kwDOCUB6oc5fbQ0K
| 21,821
|
Two tfevent files are being generated for each run of trainer
|
{
"login": "XanderWA",
"id": 125456442,
"node_id": "U_kgDOB3pQOg",
"avatar_url": "https://avatars.githubusercontent.com/u/125456442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XanderWA",
"html_url": "https://github.com/XanderWA",
"followers_url": "https://api.github.com/users/XanderWA/followers",
"following_url": "https://api.github.com/users/XanderWA/following{/other_user}",
"gists_url": "https://api.github.com/users/XanderWA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XanderWA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XanderWA/subscriptions",
"organizations_url": "https://api.github.com/users/XanderWA/orgs",
"repos_url": "https://api.github.com/users/XanderWA/repos",
"events_url": "https://api.github.com/users/XanderWA/events{/privacy}",
"received_events_url": "https://api.github.com/users/XanderWA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I have no idea. Let us know if you find the reason/how to fix it!",
"also face this issue by adding the tensorboard callback in the examples/language_modeling/run_mlm.py",
"Face the same issue:\r\n- transformers: 4.26.1\r\n- tensorboard: 2.12.0",
"Also faces this issue\r\nAs a workaround, I use this command to delete the duplicated directory if somebody is really annoyed by this.\r\n```bash\r\n find . -type d -name \"*[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].[0-9][0-9][0-9][0-9][0-9][0-9]\" -exec rm -rv {} \\;\r\n```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,681
| 1,681
|
NONE
| null |
### System Info
Each run of the trainer generates two tfevent files; the output looks like this:
/runs
--Feb27_09-46-42_...
----events.out.tfevents....0
----/1677491207.0429652
------events.out.tfevents....1
When reading these files with TensorBoard, I don't get any output from the `.1` file. How can I get rid of it (these files clutter my TensorBoard) or get actual data from it?
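A hedged cleanup sketch, assuming the duplicated event files live in subdirectories named after a Unix timestamp (as in the listing above); `remove_duplicate_event_dirs` is a hypothetical helper for illustration, not part of `transformers`:

```python
import re
import shutil
from pathlib import Path

# The duplicated event files sit in subdirectories named like
# "1677491207.0429652" (a Unix timestamp with a fractional part).
TIMESTAMP_DIR = re.compile(r"^\d{10}\.\d+$")


def remove_duplicate_event_dirs(log_dir: str) -> list:
    """Delete timestamp-named subdirectories under a TensorBoard log dir."""
    # Collect first, then delete, so we never walk into a removed directory.
    targets = [
        p for p in Path(log_dir).rglob("*")
        if p.is_dir() and TIMESTAMP_DIR.match(p.name)
    ]
    for path in targets:
        shutil.rmtree(path)
    return [str(p) for p in targets]
```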
Thanks in advance
@sgugger
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Simply starting any training.
### Expected behavior
One tfevents file or a valid output from the .1 file.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21821/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21820
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21820/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21820/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21820/events
|
https://github.com/huggingface/transformers/issues/21820
| 1,600,974,936
|
I_kwDOCUB6oc5fbPBY
| 21,820
|
rag-end2end-retriever Training Time Results
|
{
"login": "jamesoneill12",
"id": 11809091,
"node_id": "MDQ6VXNlcjExODA5MDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/11809091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesoneill12",
"html_url": "https://github.com/jamesoneill12",
"followers_url": "https://api.github.com/users/jamesoneill12/followers",
"following_url": "https://api.github.com/users/jamesoneill12/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesoneill12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesoneill12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesoneill12/subscriptions",
"organizations_url": "https://api.github.com/users/jamesoneill12/orgs",
"repos_url": "https://api.github.com/users/jamesoneill12/repos",
"events_url": "https://api.github.com/users/jamesoneill12/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesoneill12/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,680
| 1,680
|
NONE
| null |
### Feature request
Hi,
I would suggest including the training runtime for the new version of [RAG](https://github.com/huggingface/transformers/tree/main/examples/research_projects/rag-end2end-retriever) that is end-to-end. The hyperparameters for the results at the end of the README.md (under the "Comparison of end2end RAG (including DPR finetuning) VS original-RAG" section) are those below, but I have no idea of the training time for these experiments on SQuAD. Currently my runtime on 2 GPUs is looking like months, so I want to know if there's something I've missed that is causing training to be so slow, or if it simply takes this long to run RAG end-to-end.
--gpus 4
--train_batch_size 4
--eval_batch_size
--max_source_length 128
--max_target_length 25
--val_max_target_length 25
--test_max_target_length 25
--label_smoothing 0.1
--dropout 0.1
--attention_dropout 0.1
--weight_decay 0.001
--adam_epsilon 1e-08
--max_grad_norm 0.1
--lr_scheduler polynomial
--learning_rate 3e-05
--num_train_epochs 10
--warmup_steps 500
--gradient_accumulation_steps 4
--distributed_retriever ray
--num_retrieval_workers 4
Thanks, James
### Motivation
It would save researchers a lot of time when deciding whether they have enough resources to build on this work. Suggestions on how to run training with fewer resources would also be useful.
### Your contribution
Not sure it requires much help, apart from the original authors (or at least whoever ran the experiments corresponding to the results in the readme) including training time numbers in the readme results section.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21820/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21819
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21819/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21819/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21819/events
|
https://github.com/huggingface/transformers/pull/21819
| 1,600,951,577
|
PR_kwDOCUB6oc5K0dpZ
| 21,819
|
Add Seaformer model
|
{
"login": "inderpreetsingh01",
"id": 54892545,
"node_id": "MDQ6VXNlcjU0ODkyNTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/54892545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/inderpreetsingh01",
"html_url": "https://github.com/inderpreetsingh01",
"followers_url": "https://api.github.com/users/inderpreetsingh01/followers",
"following_url": "https://api.github.com/users/inderpreetsingh01/following{/other_user}",
"gists_url": "https://api.github.com/users/inderpreetsingh01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/inderpreetsingh01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/inderpreetsingh01/subscriptions",
"organizations_url": "https://api.github.com/users/inderpreetsingh01/orgs",
"repos_url": "https://api.github.com/users/inderpreetsingh01/repos",
"events_url": "https://api.github.com/users/inderpreetsingh01/events{/privacy}",
"received_events_url": "https://api.github.com/users/inderpreetsingh01/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @inderpreetsingh01, thank you! You can ping me once the PR is ready is to be reviewed. \r\n\r\nYou can follow the [official guidelines](https://huggingface.co/docs/transformers/add_new_model) to learn how to prepare the configuration, image processor and modeling files to replicate the original work such that forward propagating an image through the HF and original implementation yields the same results.",
"> # What does this PR do?\r\n> Fixes #21668 Seaformer is a two-branch architecture with Squeeze enhanced Axial Transformer for semantic segmentation on mobile devices. Supersedes #21774\r\n> \r\n> ## Before submitting\r\n> * [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).\r\n> * [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),\r\n> Pull Request section?\r\n> * [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? [Add SeaFormer model #21668](https://github.com/huggingface/transformers/issues/21668)\r\n> * [x] Did you make sure to update the documentation with your changes? Here are the\r\n> [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and\r\n> [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).\r\n> * [ ] Did you write any new necessary tests?\r\n> \r\n> ## Who can review?\r\n> @alaradirik thanks for offering help with this PR, please let me know about any changes required.\r\n\r\nThe PR is just initialized using SegFormer, I can do a review once the SeaFormer model is implemented.",
"Hi @alaradirik, I have added seaformer implementation in modeling file and updated the conversion and configuration scripts, I have ran a forward pass in notebook and output is same as the original seaformer model. Can you please review it and let me know of any changes required? I am yet to do the testing part. ",
"Hi @alaradirik thanks for the detailed review :) I have uploaded the converted model to the hub here Inderpreet01/seaformer-semantic-segmentation-large, will work on your comments and update the pr.\r\nThanks ",
"> Hi @alaradirik thanks for the detailed review :) I have uploaded the converted model to the hub here Inderpreet01/seaformer-semantic-segmentation-large, will work on your comments and update the pr. Thanks\r\n\r\nThank you! Feel free to ping me when you'd like me to do the final review",
"Hi @alaradirik I have worked on the changes you mentioned, two tests are failing in test_modeling_seaformer.py\r\n\r\nSeaformerModelTest::test_initialization - AssertionError: -6.169999778649071e-06 not found in [0.0, 1.0]\r\nI have normally initialized the parameters so negative values are expected.\r\n\r\nSeaformerModelTest::test_config - ValueError: The following keys were not properly set in the config:\r\nlabel2id and id2label are having 150 items but it is expecting 1 item in test_configuration_common.py [config_common_kwargs](https://github.com/huggingface/transformers/blob/c612628045822f909020f7eb6784c79700813eda/tests/test_configuration_common.py#L78-L79) dictionary is having id2label and label2id key dictionary with one item as value.\r\n\r\nCan you please help me with them thanks.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21819). All of your documentation changes will be reflected on that endpoint.",
"Also I have worked on the checks and most of them are successful, will need your help with the remaining three checks. thanks.",
"> Also I have worked on the checks and most of them are successful, will need your help with the remaining three checks. thanks.\r\n\r\nHi @inderpreetsingh01, I'll be taking a look shortly!",
"Hi @inderpreetsingh01, I took a look at the code and failed tests and saw that some of the failures are due to unrelated models. Could you rebase to main by clicking on the _Synch fork_ button on your [branch](https://github.com/inderpreetsingh01/transformers/tree/add_seaformer_model)?\r\n\r\nThe modeling test failure stemming from the label mapping is probably just due to setting a `num_labels` attribute within `SeaformerConfig`. All config classes inherit from the `PretrainedConfig` class, which computes the `num_labels` based on the `id2label` and `label2id` attributes, which are initialized to have 2 labels by default. You should remove the `num_labels` attribute and overwrite the default `id2label` and `label2id` attributes within the conversion script. You can take a look at the configuration, conversion and test scripts of MaskFormer and Mask2Former to see how that's done.\r\n\r\nHope this helps!\r\n\r\n",
"Hi @alaradirik, thanks for your response, removing `num_labels` from config has resolved that testcase, can you please help with this test case as well \r\n`SeaformerModelTest::test_initialization - AssertionError: -6.169999778649071e-06 not found in [0.0, 1.0]`\r\nI have normally initialized the parameters so negative values are expected.\r\n\r\nI have looked at maskformer and segformer but not able to figure this out.",
"actually this test is getting skipped in segformer model which also initializes weights normally.",
"> actually this test is getting skipped in segformer model which also initializes weights normally.\r\n\r\nHi @inderpreetsingh01, sorry for my late reply, I was off due to moving. You can overwrite the test by creating a test with the same name - `test_initialization` - as the weight initialization is inline with the original model. You can take a look at common test functions defined over [here](https://github.com/huggingface/transformers/blob/main/tests/test_modeling_common.py#L510) to see what this test does.",
"Hi @alaradirik thanks for reply, where should i create this test with the same name?",
"Hi @alaradirik can you please do the final review? thanks",
"@inderpreetsingh01 Thanks for adding this model! Ping me when the PR is ready for review (once all of @alaradirik's comments have been addressed and tests are passing). ",
"@alaradirik thanks for the review, @amyeroberts sure will ping you once model is ready",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,684
| 1,684
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #21668
SeaFormer is a two-branch architecture with a Squeeze-enhanced Axial Transformer for semantic segmentation on mobile devices.
<br>
Supersedes #21774
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? #21668
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@alaradirik thanks for offering help with this PR, please let me know about any changes required.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21819/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21819",
"html_url": "https://github.com/huggingface/transformers/pull/21819",
"diff_url": "https://github.com/huggingface/transformers/pull/21819.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21819.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21818
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21818/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21818/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21818/events
|
https://github.com/huggingface/transformers/pull/21818
| 1,600,936,658
|
PR_kwDOCUB6oc5K0aYe
| 21,818
|
Fix gradient checkpointing bug in git
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter when calling `generate` on models that use `gradient_checkpointing`.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21818/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21818",
"html_url": "https://github.com/huggingface/transformers/pull/21818",
"diff_url": "https://github.com/huggingface/transformers/pull/21818.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21818.patch",
"merged_at": 1677588394000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21817
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21817/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21817/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21817/events
|
https://github.com/huggingface/transformers/pull/21817
| 1,600,917,823
|
PR_kwDOCUB6oc5K0WZn
| 21,817
|
[`Blip2`] Add `Blip2Model`
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks! Once it's merged, I'll create a small PR to update the troubleshooting section. "
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds a new class, `Blip2Model`, so that the model can be included in the `AutoModel` mapping and also used by users who want to conveniently extract text, image, and the so-called q-former features from the model.
This PR also addresses this comment: https://github.com/huggingface/transformers/pull/21708#pullrequestreview-1308704909
I decided to still keep `AutoModelForCausalLM` & `AutoModelForSeq2SeqLM` for `self.language_model`, as using `AutoModel` there leads to keys that are not properly loaded from the Hub. Let me know if you think that this is a mistake and should be addressed differently.
cc @sgugger @MKhalusova @stevhliu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21817/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21817/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21817",
"html_url": "https://github.com/huggingface/transformers/pull/21817",
"diff_url": "https://github.com/huggingface/transformers/pull/21817.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21817.patch",
"merged_at": 1677595376000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21816
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21816/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21816/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21816/events
|
https://github.com/huggingface/transformers/pull/21816
| 1,600,862,273
|
PR_kwDOCUB6oc5K0KVN
| 21,816
|
Fix gradient checkpointing imagegpt
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter when using `generate` with models that use `gradient_checkpointing`.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21816/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21816",
"html_url": "https://github.com/huggingface/transformers/pull/21816",
"diff_url": "https://github.com/huggingface/transformers/pull/21816.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21816.patch",
"merged_at": 1677588425000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21815
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21815/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21815/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21815/events
|
https://github.com/huggingface/transformers/pull/21815
| 1,600,815,097
|
PR_kwDOCUB6oc5K0AM1
| 21,815
|
Fix gradient checkpointing bug in gptneox
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Apologies, hadn't pushed the latest commit. All done now!",
"Awesome, thank you for the contribution <3 "
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes a bug that a user can encounter when using `generate` with models that use `gradient_checkpointing`.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21815/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21815/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21815",
"html_url": "https://github.com/huggingface/transformers/pull/21815",
"diff_url": "https://github.com/huggingface/transformers/pull/21815.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21815.patch",
"merged_at": 1677509372000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21814
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21814/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21814/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21814/events
|
https://github.com/huggingface/transformers/pull/21814
| 1,600,814,454
|
PR_kwDOCUB6oc5K0AD6
| 21,814
|
[DETR and friends] Remove is_timm_available
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"There is a pretty big conflict in the test modeling DETR file, can you fix it Niels?"
] | 1,677
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR:
- removes the `is_timm_available` dependency check for DETR and friends, uses `is_torch_available` instead and uses `requires_backends["timm"]` in case `config.use_timm_backbone=True`.
- adapts DETR's conversion script to make DETR work with our `AutoBackbone` class, rather than the timm backbone. This way one can use DETR by only installing `Transformers`.
To do:
- [x] upload checkpoint to the hub
- [x] add integration test
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21814/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21814",
"html_url": "https://github.com/huggingface/transformers/pull/21814",
"diff_url": "https://github.com/huggingface/transformers/pull/21814.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21814.patch",
"merged_at": 1678220380000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21813
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21813/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21813/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21813/events
|
https://github.com/huggingface/transformers/issues/21813
| 1,600,784,247
|
I_kwDOCUB6oc5fagd3
| 21,813
|
Error when using BART for Prefix Tuning. Replace `view` with `reshape` in `BartAttention`?
|
{
"login": "jbmcd",
"id": 53952163,
"node_id": "MDQ6VXNlcjUzOTUyMTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/53952163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbmcd",
"html_url": "https://github.com/jbmcd",
"followers_url": "https://api.github.com/users/jbmcd/followers",
"following_url": "https://api.github.com/users/jbmcd/following{/other_user}",
"gists_url": "https://api.github.com/users/jbmcd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbmcd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbmcd/subscriptions",
"organizations_url": "https://api.github.com/users/jbmcd/orgs",
"repos_url": "https://api.github.com/users/jbmcd/repos",
"events_url": "https://api.github.com/users/jbmcd/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbmcd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello! Thanks a lot for reporting. Will open a PR to fix this 😉 \r\n",
"@ArthurZucker It seems in GPTJ there also have the same problem when using prefix-tuning trained model to generate text\r\n```\r\n File \"/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/transformers/generation/utils.py\", line 1391, in generate\r\n return self.greedy_search(\r\n File \"/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/transformers/generation/utils.py\", line 2179, in greedy_search\r\n outputs = self(\r\n File \"/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py\", line 813, in forward\r\n transformer_outputs = self.transformer(\r\n File \"/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py\", line 575, in forward\r\n position_ids = position_ids.view(-1, input_shape[-1])\r\nRuntimeError: shape '[-1, 108]' is invalid for input of size 128\r\n```\r\n___________________________________________________________________________\r\nupdate\r\nwhen not passing 'attention_mask', the error changed to:\r\n```\r\n File \"/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py\", line 302, in forward\r\n attn_outputs = self.attn(\r\n File \"/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py\", line 251, in forward\r\n attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)\r\n File 
\"/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py\", line 176, in _attn\r\n attn_weights = attn_weights + attention_mask\r\nRuntimeError: The size of tensor a (128) must match the size of tensor b (108) at non-singleton dimension 3\r\n```\r\nand I use num_virtual_tokens=20, which seems is a problem of `PEFT`?",
"Hey, not really sure this is the same, the error does not involve having to replace `view` with `reshape`. You seem to have a problem with the positional ids. They are deprecated see #21869. ",
"> Hey, not really sure this is the same, the error does not involve having to replace `view` with `reshape`. You seem to have a problem with the positional ids. They are deprecated see #21869.\r\n\r\nyeah it seems not the same root cause, I will turn to PEFT to find resolution, thanks for your reply!\r\n\r\n-------------------------------------------------------------------------------------------------------------------\r\n\r\nthe problem solved withou any code changing but just install transformers' main branch from source"
] | 1,677
| 1,678
| 1,677
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When using Prefix Tuning with BART in PEFT, an error occurs for some edge cases, see [#129](https://github.com/huggingface/peft/issues/129#issue-1598538584), with a suggestion to replace `view` with `reshape` in `BartAttention`.
### Expected behavior
I would expect the example code provided in [#129](https://github.com/huggingface/peft/issues/129#issue-1598538584) to work regardless of the length of the input.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21813/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21812
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21812/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21812/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21812/events
|
https://github.com/huggingface/transformers/pull/21812
| 1,600,581,636
|
PR_kwDOCUB6oc5KzNpV
| 21,812
|
update FSDP and add XLA-FSDP documentation
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @pacman100 and @sgugger ! This is exciting!"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
1. update FSDP and add XLA-FSDP documentation
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21812/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21812/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21812",
"html_url": "https://github.com/huggingface/transformers/pull/21812",
"diff_url": "https://github.com/huggingface/transformers/pull/21812.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21812.patch",
"merged_at": 1677680468000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21811
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21811/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21811/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21811/events
|
https://github.com/huggingface/transformers/pull/21811
| 1,600,520,603
|
PR_kwDOCUB6oc5KzAXd
| 21,811
|
Fix the issue of blip model returning loss even when the label is not provided.
|
{
"login": "raghavanone",
"id": 115454562,
"node_id": "U_kgDOBuGyYg",
"avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raghavanone",
"html_url": "https://github.com/raghavanone",
"followers_url": "https://api.github.com/users/raghavanone/followers",
"following_url": "https://api.github.com/users/raghavanone/following{/other_user}",
"gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions",
"organizations_url": "https://api.github.com/users/raghavanone/orgs",
"repos_url": "https://api.github.com/users/raghavanone/repos",
"events_url": "https://api.github.com/users/raghavanone/events{/privacy}",
"received_events_url": "https://api.github.com/users/raghavanone/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @younesbelkada and @amyeroberts ",
"> \r\n\r\n@younesbelkada \r\nOne of the tests (test_inference_image_captioning_fp16) is failing, not sure if it is related to my changes. \r\n",
"Hi @raghavanone \r\nHum I just went through the daily CI report and this test seems to be not failing on our end, can you share with us the traceback of the error?",
"> Hi @raghavanone Hum I just went through the daily CI report and this test seems to be not failing on our end, can you share with us the traceback of the error?\r\n\r\n```\r\ntests/models/blip/test_modeling_blip.py:1113 (BlipModelIntegrationTest.test_inference_image_captioning_fp16)\r\nself = <tests.models.blip.test_modeling_blip.BlipModelIntegrationTest testMethod=test_inference_image_captioning_fp16>\r\n\r\n def test_inference_image_captioning_fp16(self):\r\n model = BlipForConditionalGeneration.from_pretrained(\r\n \"Salesforce/blip-image-captioning-base\", torch_dtype=torch.float16\r\n ).to(torch_device)\r\n processor = BlipProcessor.from_pretrained(\"Salesforce/blip-image-captioning-base\")\r\n image = prepare_img()\r\n \r\n # image only\r\n inputs = processor(images=image, return_tensors=\"pt\").to(torch_device, torch.float16)\r\n \r\n> predictions = model.generate(**inputs)\r\n\r\ntests/models/blip/test_modeling_blip.py:1124: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n/opt/homebrew/Caskroom/miniforge/base/envs/hf_dev/lib/python3.8/site-packages/torch/autograd/grad_mode.py:27: in decorate_context\r\n return func(*args, **kwargs)\r\nsrc/transformers/models/blip/modeling_blip.py:1068: in generate\r\n vision_outputs = self.vision_model(\r\n/opt/homebrew/Caskroom/miniforge/base/envs/hf_dev/lib/python3.8/site-packages/torch/nn/modules/module.py:1194: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/blip/modeling_blip.py:694: in forward\r\n hidden_states = self.embeddings(pixel_values)\r\n/opt/homebrew/Caskroom/miniforge/base/envs/hf_dev/lib/python3.8/site-packages/torch/nn/modules/module.py:1194: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/blip/modeling_blip.py:241: in forward\r\n patch_embeds = self.patch_embedding(pixel_values) # shape = [*, width, grid, 
grid]\r\n/opt/homebrew/Caskroom/miniforge/base/envs/hf_dev/lib/python3.8/site-packages/torch/nn/modules/module.py:1194: in _call_impl\r\n return forward_call(*input, **kwargs)\r\n/opt/homebrew/Caskroom/miniforge/base/envs/hf_dev/lib/python3.8/site-packages/torch/nn/modules/conv.py:463: in forward\r\n return self._conv_forward(input, self.weight, self.bias)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = Conv2d(3, 768, kernel_size=(16, 16), stride=(16, 16))\r\ninput = tensor([[[[ 0.8647, 0.9229, 0.9375, ..., 1.7549, 1.7549, 1.7549],\r\n [ 0.9082, 0.9375, 0.9521, ..., 1....2856, -0.3569],\r\n [-0.3142, -0.3425, -0.3569, ..., -0.3000, -0.3569, -0.3994]]]],\r\n dtype=torch.float16)\r\nweight = Parameter containing:\r\ntensor([[[[ 3.3875e-03, 1.4102e-04, 7.0906e-04, ..., -4.6539e-03,\r\n 1.2560e-03, -5....3.7727e-03, ..., -1.3084e-03,\r\n 4.8304e-04, 7.3357e-03]]]], dtype=torch.float16,\r\n requires_grad=True)\r\nbias = Parameter containing:\r\ntensor([ 7.6477e-02, 7.6233e-02, 2.6343e-01, 2.6718e-02, 3.8727e-02,\r\n 1.2962e-02, -2....6932e-02, -9.2529e-02,\r\n 7.5012e-02, 6.4812e-03, -1.7303e-02], dtype=torch.float16,\r\n requires_grad=True)\r\n\r\n def _conv_forward(self, input: Tensor, weight: Tensor, bias: Optional[Tensor]):\r\n if self.padding_mode != 'zeros':\r\n return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),\r\n weight, bias, self.stride,\r\n _pair(0), self.dilation, self.groups)\r\n> return F.conv2d(input, weight, bias, self.stride,\r\n self.padding, self.dilation, self.groups)\r\nE RuntimeError: \"slow_conv2d_cpu\" not implemented for 'Half'\r\n\r\n/opt/homebrew/Caskroom/miniforge/base/envs/hf_dev/lib/python3.8/site-packages/torch/nn/modules/conv.py:459: RuntimeError\r\n```",
"I see, you are not running the tests on GPU, if you don't have access to any GPU I can run the slow test for you\r\n(Also we might need to add `require_torch_gpu` decorator on this test, if you could also add it in this PR it would be great 🙏 )",
"> test_inference_image_captioning_fp16\r\n\r\nOh, I just realised that, I am adding the tag. ",
"@younesbelkada Did that test fail in your setup ? Is there something I have to fix ? "
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #21510
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21811/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21811",
"html_url": "https://github.com/huggingface/transformers/pull/21811",
"diff_url": "https://github.com/huggingface/transformers/pull/21811.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21811.patch",
"merged_at": 1677596048000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21810
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21810/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21810/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21810/events
|
https://github.com/huggingface/transformers/issues/21810
| 1,600,409,760
|
I_kwDOCUB6oc5fZFCg
| 21,810
|
fsmt Tokenizer.save_vocabulary Bug
|
{
"login": "lihaoxin2020",
"id": 77715908,
"node_id": "MDQ6VXNlcjc3NzE1OTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/77715908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lihaoxin2020",
"html_url": "https://github.com/lihaoxin2020",
"followers_url": "https://api.github.com/users/lihaoxin2020/followers",
"following_url": "https://api.github.com/users/lihaoxin2020/following{/other_user}",
"gists_url": "https://api.github.com/users/lihaoxin2020/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lihaoxin2020/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lihaoxin2020/subscriptions",
"organizations_url": "https://api.github.com/users/lihaoxin2020/orgs",
"repos_url": "https://api.github.com/users/lihaoxin2020/repos",
"events_url": "https://api.github.com/users/lihaoxin2020/events{/privacy}",
"received_events_url": "https://api.github.com/users/lihaoxin2020/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey, apparently, it is a known issue that the vocabulary has some holes. The problem with your training is probably not related, as you mention that it `suddenly` drops and corrupts. Meaning up until some point everything works well, no? 😉 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,683
| 1,683
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm using FSMT and trying to reproduce the behavior of the basic Transformer on the WMT16 translation task with [run_translation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation). It turns out that during training, both the validation loss and training loss go down normally, and the validation BLEU score increases gradually at the beginning, but then suddenly drops a lot and collapses to ~0. After some investigation, I found that the FSMT tokenizer keeps complaining each time I try to save the training state.
You should be able to reproduce the complaint simply by running the short script below:
```python
from transformers import (
AutoModelForSeq2SeqLM,
AutoTokenizer
)
tokenizer = AutoTokenizer.from_pretrained("allenai/wmt16-en-de-12-1")
tokenizer.save_vocabulary("./tmp")
```
### Expected behavior
```
Saving vocabulary to ./tmp/merges.txt: BPE merge indices are not consecutive. Please check that the tokenizer is not corrupted!
```
The training curve for validation BLEU looks like [this](https://api.wandb.ai/links/alanlee/2zddajjr) and the validation loss looks like [this](https://api.wandb.ai/links/alanlee/h77vb3bf); both show that there should be no gradient explosion, yet training is still broken.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21810/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21809
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21809/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21809/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21809/events
|
https://github.com/huggingface/transformers/issues/21809
| 1,600,138,435
|
I_kwDOCUB6oc5fYCzD
| 21,809
|
How to set language in Whisper pipeline for audio transcription?
|
{
"login": "melihogutcen",
"id": 43522440,
"node_id": "MDQ6VXNlcjQzNTIyNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/43522440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/melihogutcen",
"html_url": "https://github.com/melihogutcen",
"followers_url": "https://api.github.com/users/melihogutcen/followers",
"following_url": "https://api.github.com/users/melihogutcen/following{/other_user}",
"gists_url": "https://api.github.com/users/melihogutcen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/melihogutcen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/melihogutcen/subscriptions",
"organizations_url": "https://api.github.com/users/melihogutcen/orgs",
"repos_url": "https://api.github.com/users/melihogutcen/repos",
"events_url": "https://api.github.com/users/melihogutcen/events{/privacy}",
"received_events_url": "https://api.github.com/users/melihogutcen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@ArthurZucker ",
"You can add `generate_kwargs = {\"language\":\"<|tr|>\",\"task\": \"transcribe\"},` to your pipeline initialization and it should work. ",
"Updated the notebook with the following new line : \r\n> `pipe(speech_file, generate_kwargs = {\"task\":\"transcribe\", \"language\":\"<|fr|>\"} )`",
"Voila! I am able to set the language by using `generate_kwargs = {\"language\":\"<|tr|>\",\"task\": \"transcribe\"}` in pipeline initialization. Thanks.",
"Hello, I got same problem. But `generate_kwargs = {\"language\":\"<|tr|>\",\"task\": \"transcribe\"}` is not work for me. \r\n```python\r\nValueError: The following `model_kwargs` are not used by the model: ['task', 'language'] (note: typos in the generate arguments will also show up in this list)\r\n```\r\nHere is the code:\r\n```python\r\nfrom transformers import WhisperProcessor,WhisperForConditionalGeneration\r\nimport whisper\r\nfrom transformers import pipeline\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"./whisper_tiny_pytorch_model.bin\",config=\"./config.json\").to(\"cuda:0\")\r\nprocessor = WhisperProcessor.from_pretrained(\"./\")\r\naudio = whisper.load_audio(\"./a.flac\")\r\ni = processor(audio,return_tensors=\"pt\").input_features.to(\"cuda:0\")\r\npipe = pipeline(\r\n \"automatic-speech-recognition\",\r\n model=model,\r\n tokenizer=processor.tokenizer,\r\n feature_extractor=processor.feature_extractor,\r\n chunk_length_s=30,\r\n device=\"cuda:0\",\r\n)\r\nr = pipe(av, generate_kwargs = {\"task\":\"transcribe\", \"language\":\"japanese\"})\r\n```\r\nCould you help me?\r\n\r\nEnv:\r\npytorch==2.1.0.dev20230302+cu117\r\ntransformer==4.26.1\r\nwhisper model is download on huggingface.",
"Hey @AnestLarry, the language tag that you are using is wrong! \r\nAs you can see in the `generation_config.json`, the `lang_to_id` defines the mapping from language token to the actual input ids. \r\nWhat you should be using (and there is an example of this in the notebook [here ](https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor#scrollTo=Ca4YYdtATxzo)) is the following:\r\n```python \r\n...\r\npipe( av, generate_kwargs = {\"language\"= \"<|ja|>\"}\r\n```",
"Hey @ArthurZucker , \r\n```python\r\nr = pipe(audio, generate_kwargs = {\"language\":\"<|ja|>\"})\r\n\r\nValueError: The following `model_kwargs` are not used by the model: ['language'] (note: typos in the generate arguments will also show up in this list)\r\n```\r\nI still got the same error. When I using `{\"language\": \"<|ja|>\"}` to `get_decoder_prompt_ids` (in a way direct to using model generate), I got a error tips to change my arg.\r\n```python\r\nprocessor.get_decoder_prompt_ids(language=\"<|ja|>\",task=\"transcribe\")\r\n\r\nValueError: Unsupported language: <|ja|>. Language should be one of: ['english', 'chinese', 'german', 'spanish', 'russian', 'korean', 'french', 'japanese', 'portuguese', 'turkish', 'polish', 'catalan', 'dutch', 'arabic', 'swedish', 'italian', 'indonesian', 'hindi', 'finnish', 'vietnamese', 'hebrew', 'ukrainian', 'greek', 'malay', 'czech', 'romanian', 'danish', 'hungarian', 'tamil', 'norwegian', 'thai', 'urdu', 'croatian', 'bulgarian', 'lithuanian', 'latin', 'maori', 'malayalam', 'welsh', 'slovak', 'telugu', 'persian', 'latvian', 'bengali', 'serbian', 'azerbaijani', 'slovenian', 'kannada', 'estonian', 'macedonian', 'breton', 'basque', 'icelandic', 'armenian', 'nepali', 'mongolian', 'bosnian', 'kazakh', 'albanian', 'swahili', 'galician', 'marathi', 'punjabi', 'sinhala', 'khmer', 'shona', 'yoruba', 'somali', 'afrikaans', 'occitan', 'georgian', 'belarusian', 'tajik', 'sindhi', 'gujarati', 'amharic', 'yiddish', 'lao', 'uzbek', 'faroese', 'haitian creole', 'pashto', 'turkmen', 'nynorsk', 'maltese', 'sanskrit', 'luxembourgish', 'myanmar', 'tibetan', 'tagalog', 'malagasy', 'assamese', 'tatar', 'hawaiian', 'lingala', 'hausa', 'bashkir', 'javanese', 'sundanese', 'burmese', 'valencian', 'flemish', 'haitian', 'letzeburgesch', 'pushto', 'panjabi', 'moldavian', 'moldovan', 'sinhalese', 'castilian'].\r\n```\r\nAnd I can get valid result with model generate.\r\n```python\r\nforced_decoder_ids = 
processor.get_decoder_prompt_ids(language=\"japanese\",task=\"transcribe\")\r\nr = model.generate(i,forced_decoder_ids = forced_decoder_ids)\r\n\r\nout: ['<|startoftranscript|><|ja|><|transcribe|><|notimestamps|>夜が開き出し...<|endoftext|>']\r\n```",
"Sorry I guess I should have been clearer: \r\n`pipe( av, generate_kwargs = {\"language\"= \"<|ja|>\", \"task\"=\"transcribe\"}`\r\n(I was just sharing how to fix the language)\r\nMoreover, this is not on the latest release, as the notebook mentions you have to use the `main` branch",
"Thank you for notion me the version problem ignored by me. I had run success (without error message) after install `main` branch. But `fix the language` still not work.\r\n```python\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"./whisper_tiny_pytorch_model.bin\",config=\"./config.json\").to(\"cuda:0\")\r\nprocessor = WhisperProcessor.from_pretrained(\"./\")\r\naudio = whisper.load_audio(\"./a.mp3\")\r\ni = processor(audio,return_tensors=\"pt\").input_features.to(\"cuda:0\")\r\npipe = pipeline(\r\n \"automatic-speech-recognition\",\r\n model=model,\r\n tokenizer=processor.tokenizer,\r\n feature_extractor=processor.feature_extractor,\r\n chunk_length_s=30,\r\n device=\"cuda:0\",\r\n)\r\n\r\nr = pipe(audio, generate_kwargs = {\"language\":\"<|ja|>\",\"task\":\"transcribe\"})\r\n{'text': \" I'm not going bit ...}\r\n```\r\nI fixed `ja` and got a English result. (`audio` is a japanese song.\r\nIs the code wrong though?",
"Try using the notebook I provided, your custom model might not be working and I can't debug it for you 😅 \r\nCould you try using the `openai/whisper-small` model as shown in the notbook? Then you can compare the configuration file and generation config \r\n",
"Very thank you. My model is download from huggingface without change anything from me. Just used `openai/whisper` to successfully complete the task. And I found that model file name look like effect the result. 😅\r\nChange model file name `whisper_tiny_pytorch_model.bin` to `pytorch_model.bin`, and no problem now.",
"Great that you no longer have an issue! Thanks for bearing with me 🤗 ",
"When I am installing the newest Transformers, I am now getting the following error setting language in the pipeline:\r\n\r\n```\r\n File \"/Users/me/miniconda3/envs/torch-gpu/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py\", line 1570, in generate\r\n if generation_config.language in generation_config.lang_to_id.keys():\r\nAttributeError: 'GenerationConfig' object has no attribute 'lang_to_id'\r\n```",
"I had this same issue with our finetuned [whisper-large-rixvox](https://huggingface.co/KBLab/whisper-large-rixvox/tree/main) @peregilk . \r\n\r\nI think what happens is that finetuned Whisper models typically are already configured to predict a specific language during finetuning. When the people who train these models save a checkpoint, there is no \"GenerationConfig\" generated, as the model is still hardcoded to predict a specific language. \r\n\r\nE.g. see [generation_config.json from OpenAI/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2/blob/main/generation_config.json) and compare against a finetuned version of whisper where [generation_config.json is missing](https://huggingface.co/KBLab/whisper-large-rixvox/tree/main). \r\n\r\nIf the person who trains a finetuned whisper follows [Huggingface's finetuning instructions](https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/fine-tune-whisper-non-streaming.ipynb), there will be no GenerationConfig for the model. \r\n\r\nPerhaps there should be a better error message for this @ArthurZucker .\r\n\r\nThe solution is simply to not specify `generate_kwargs` at all for any finetuned model where `generation_config.json` is missing. The finetuned model will predict in the language it was finetuned on without the `generate_kwargs`.",
"Thanks for reporting @peregilk and @Lauler! This is probably quite a good fix right @ArthurZucker? We don't use any of the `generation_config` logic unless `generation_config.json` is present on the Hub?",
"I believe the current workaround is to update the generation config according to this comment: https://github.com/huggingface/transformers/issues/21878#issuecomment-1451902363\r\n\r\nThis should fix both issues described above. It's cumbersome though and ideally we'd have a way of handling it in transformers!",
"Detecting language using up to the first 30 seconds. Use `--language` to specify the language\r\nDetected language: Javanese\r\nHello, i'm using whisper to translate. how to change the detected langunge? what is the code? thanks in advance",
"@ArthurZucker @sanchit-gandhi thanks, this worked, but I would expect that `model.config.suppress_tokens = [50290]` would work as well (50290 corresponds to the index of \"<|ur|>\". I wanted to supperess urdu) if I do not want to use pipeline but I still get the transcription in urdu. But in this case, what worked for me was `model.config.forced_decoder_ids = processor.tokenizer.get_decoder_prompt_ids(language=\"english\", task=\"transcribe\")`. Just curious what is going on behind the scene. Thanks",
"Hey @kamalojasv181 - could you try updating the `generation_config`, since it receives priority over the config:\r\n```python\r\nmodel.generation_config.suppress_tokens.append(50290)\r\n```\r\n=> this should set the probability of the `<|ur|>` to zero during generation.\r\n\r\nThe recommended API is now to pass `language=..., task=...` directly to generate. This takes precedence over all generation config / config attributes, and is far easier to set: https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperForConditionalGeneration.generate.language\r\n\r\nE.g. see how we set the `language=\"french\"` and `task=\"transcribe\"` for this French speech transcription example:\r\n```python\r\nfrom transformers import WhisperProcessor, WhisperForConditionalGeneration\r\nfrom datasets import Audio, load_dataset\r\n\r\n# load model and processor\r\nprocessor = WhisperProcessor.from_pretrained(\"openai/whisper-large-v2\")\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-large-v2\")\r\n\r\n# load streaming dataset and read first audio sample\r\nds = load_dataset(\"common_voice\", \"fr\", split=\"test\", streaming=True)\r\nds = ds.cast_column(\"audio\", Audio(sampling_rate=16_000))\r\ninput_speech = next(iter(ds))[\"audio\"]\r\n\r\n# pre-process audio sample to log-mel spectrogram\r\ninput_features = processor(input_speech[\"array\"], sampling_rate=input_speech[\"sampling_rate\"], return_tensors=\"pt\").input_features\r\n\r\n# generate token ids\r\npredicted_ids = model.generate(input_features, language=\"french\", task=\"transcribe\")\r\n\r\n# decode token ids to text\r\ntranscription = processor.batch_decode(predicted_ids, skip_special_tokens=True)\r\nprint(transcription)\r\n```\r\n\r\nThis does the same thing as the forced decoder ids under the hood, setting the task/language token for Whisper: https://huggingface.co/openai/whisper-large-v2#usage",
"Thanks"
] | 1,677
| 1,701
| 1,677
|
NONE
| null |
### Problem
Hello,
I followed this notebook for Whisper pipelines. https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor?usp=sharing#scrollTo=Ca4YYdtATxzo
Here, I want to run speech transcription with the openai/whisper-large-v2 model using the pipeline. With WhisperProcessor I can set the language, but this approach has a disadvantage for audio files longer than 30 seconds. With the code below I can set the language.
```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
device = "cuda" if torch.cuda.is_available() else "cpu"
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2").to(device)
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
inputs = processor.feature_extractor(speech_data, return_tensors="pt", sampling_rate=16_000).input_features.to(device)
generate_ids = model.generate(inputs, max_length=480_000, language="<|tr|>", task="transcribe", return_timestamps=True)
results = processor.tokenizer.decode(generate_ids[0], decode_with_timestamps=True, output_offsets=True)
```
Long audio files can be processed in the pipeline by setting chunk_length_s as below, but I couldn't set the language in the pipeline. Therefore, I get English results on my Turkish speech data.
```python
from transformers import pipeline
MODEL_NAME = "openai/whisper-large-v2"
pipe = pipeline(
task="automatic-speech-recognition",
model=MODEL_NAME,
device='cpu')
pipe(speech_file, return_timestamps=True, chunk_length_s=30, stride_length_s=[6,0], batch_size=32)
```
Is there a way to set the language?
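Per the maintainer replies above (which require a recent `transformers`, e.g. the `main` branch / `v4.27dev`), the language and task can be forwarded through the pipeline via `generate_kwargs`. A hedged sketch — the `<|tr|>` token and the audio path are illustrative, and the pipeline construction is wrapped in a function so the heavy model download only happens when it is called:

```python
# Sketch based on the replies above: on a recent transformers, the ASR
# pipeline forwards `generate_kwargs` to `model.generate()`, so the
# language/task tokens can be set there. Audio path is a placeholder.
generate_kwargs = {"language": "<|tr|>", "task": "transcribe"}

def transcribe(speech_file: str) -> dict:
    from transformers import pipeline  # imported lazily; heavy dependency

    pipe = pipeline(
        task="automatic-speech-recognition",
        model="openai/whisper-large-v2",
        device="cpu",
    )
    return pipe(
        speech_file,
        return_timestamps=True,
        chunk_length_s=30,
        stride_length_s=[6, 0],
        batch_size=32,
        generate_kwargs=generate_kwargs,
    )
```

On older releases the pipeline raises `ValueError: The following model_kwargs are not used by the model: ['language']` instead.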
### System Info
docker image:
- pytorch/pytorch:1.13.1-cuda11.6-cudnn8-runtime
Transformers Version:
`transformers==v4.27dev`
### Who can help?
@sanchit-gandhi @Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import pipeline
MODEL_NAME = "openai/whisper-large-v2"
pipe = pipeline(
task="automatic-speech-recognition",
model=MODEL_NAME,
device='cpu')
pipe(speech_file, return_timestamps=True, chunk_length_s=30, stride_length_s=[6,0], batch_size=32)
```
### Expected behavior
```
Label: "Bazı Türkçe kelimeler."
Prediction: "Some Turkish words."
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21809/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21808
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21808/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21808/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21808/events
|
https://github.com/huggingface/transformers/issues/21808
| 1,600,116,600
|
I_kwDOCUB6oc5fX9d4
| 21,808
|
Using Bloom with int8 generate unreadable outputs
|
{
"login": "SAI990323",
"id": 40531945,
"node_id": "MDQ6VXNlcjQwNTMxOTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/40531945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SAI990323",
"html_url": "https://github.com/SAI990323",
"followers_url": "https://api.github.com/users/SAI990323/followers",
"following_url": "https://api.github.com/users/SAI990323/following{/other_user}",
"gists_url": "https://api.github.com/users/SAI990323/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SAI990323/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SAI990323/subscriptions",
"organizations_url": "https://api.github.com/users/SAI990323/orgs",
"repos_url": "https://api.github.com/users/SAI990323/repos",
"events_url": "https://api.github.com/users/SAI990323/events{/privacy}",
"received_events_url": "https://api.github.com/users/SAI990323/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @younesbelkada ",
"the V100 series were not supported by `bitsandbytes` but now they should be compatible since the `0.37.0` relase. What is your `bitsandbytes` version? Can you try to update `bitsandbytes` ? `pip install --upgrade bitsandbytes`",
"> the V100 series were not supported by `bitsandbytes` but now they should be compatible since the `0.37.0` relase. What is your `bitsandbytes` version? Can you try to update `bitsandbytes` ? `pip install --upgrade bitsandbytes`\r\n\r\nI have used their latest version 0.37.0, and the int8 type of \"bloom-7b1\" seems work well on a single Tesla V100, albeit it have repetitions at the end of the outputs.",
"@SAI990323 \r\nAre you still facing the issue? Can you try an approach that is similar to: https://github.com/huggingface/transformers/issues/21987#issuecomment-1458231709 and let us know if this works?\r\nAlso make sure to use `bitsandbytes==0.37.1`",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,683
| 1,683
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.27
- Python version: 3.9.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.12.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
When I use the int8 version of Bloom to generate outputs on 8*Tesla V100 (32GB), I find that all of the tokens generated by the model are "unk". Are there any ideas to help me solve this problem?
This phenomenon doesn't appear in the bloom-7b1 model.
### Who can help?
@sgugger @muellerzr
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
My code is here:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "model_path"
max_memory_mapping = {0: "25GB", 1: "25GB", 2: "25GB", 3: "25GB", 4: "25GB", 5: "25GB", 6: "25GB", 7: "25GB"}
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", max_memory=max_memory_mapping, load_in_8bit=True)
inputs = tokenizer.encode('''Hello ''', return_tensors="pt").to("cuda")
outputs = model.generate(inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0]))
```
And the output is "Hello unk unk unk unk unk unk unk unk unk unk "
### Expected behavior
I expect the model to output some meaningful result, such as "Hello, I am a young woman of 28 years old who has just arrived in New Braunfels for" from the API at https://huggingface.co/bigscience/bloom?text=Hello, or "Hello I am a newbie in python and I am" from int8 inference with the "bloom-7b1" model on a single Tesla V100.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21808/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21807
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21807/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21807/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21807/events
|
https://github.com/huggingface/transformers/issues/21807
| 1,600,061,281
|
I_kwDOCUB6oc5fXv9h
| 21,807
|
Conversion of OWL-ViT model fails
|
{
"login": "alexey-chaykin",
"id": 15346047,
"node_id": "MDQ6VXNlcjE1MzQ2MDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/15346047?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexey-chaykin",
"html_url": "https://github.com/alexey-chaykin",
"followers_url": "https://api.github.com/users/alexey-chaykin/followers",
"following_url": "https://api.github.com/users/alexey-chaykin/following{/other_user}",
"gists_url": "https://api.github.com/users/alexey-chaykin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexey-chaykin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexey-chaykin/subscriptions",
"organizations_url": "https://api.github.com/users/alexey-chaykin/orgs",
"repos_url": "https://api.github.com/users/alexey-chaykin/repos",
"events_url": "https://api.github.com/users/alexey-chaykin/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexey-chaykin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"P.S. if that could help, when I'm changing ['optimizer']['target'] to ['params'], the script is failing with:\r\n```\r\nTraceback (most recent call last):\r\n File \"~/transformers/src/transformers/models/owlvit/convert_owlvit_original_flax_to_hf.py\", line 411, in <module>\r\n pt_backbone_params, clip_pt, attn_params = convert_clip_backbone(flax_params, torch_config)\r\n File \"~/transformers/src/transformers/models/owlvit/convert_owlvit_original_flax_to_hf.py\", line 281, in convert_clip_backbone\r\n flax_clip_params = flatten_nested_dict(flax_params[\"backbone\"][\"clip\"])\r\n File \"~/transformers/src/transformers/models/owlvit/convert_owlvit_original_flax_to_hf.py\", line 85, in flatten_nested_dict\r\n if isinstance(v, collections.MutableMapping):\r\nAttributeError: module 'collections' has no attribute 'MutableMapping'\r\n```",
"You almost certainly will have to fork the Transformers library and adapt the conversion script a bit to make it work for your use case.\r\n\r\nIn this case, it seems that you're not properly reading the parameters of the model into a dictionary. I'd check which keys are in `checkpoints.restore_checkpoint(args.owlvit_checkpoint)` => you apparently already found that it should be 'params'. Next, you can check what's exactly in `flax_clip_params`, this should be in dictionary with key-value pairs of parameter names and their corresponding values.",
"Hi @alexey-chaykin, I can convert the official checkpoints but the training script was not available when I added OWL-ViT to transformers and their original script is probably a little different than the released one. You would need to find out what key the parameters are stored under and edit the conversion script.\r\n\r\nAs for the second error you're getting, I think it's just a version issue as `collections.MutableMapping` has been moved to `collections.abc.MutableMapping` in newer versions.",
"Thanks, Niels, Alara. Will do that way.\r\n\r\nDo you have any plans to implement pytorch training for HuggingFace OWL-ViT?",
"No problem @alexey-chaykin, we are planning to implement PyTorch training as the training code is released. We will probably be releasing a tutorial / blog post on it in the next few weeks :)",
"Thanks! Looking forward to that.",
"@alaradirik Any update on the tutorial/blog or the training code in PyTorch?",
"I'm re-opening the issue #20091 for adding the training code for Owl-Vit for anyone in the community to contribute if they're interested cc @rafaelpadilla "
] | 1,677
| 1,694
| 1,677
|
NONE
| null |
### System Info
I've trained an OWL-ViT model on my data using the [training code from the original repo](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit#fine-tuning) and am trying to use it with the HuggingFace PyTorch OWL-ViT implementation.
As far as I understand, I first need to convert it using [convert_owlvit_original_flax_to_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/owlvit/convert_owlvit_original_flax_to_hf.py).
But, when I invoke:
`python3 convert_owlvit_original_flax_to_hf.py --owlvit_version clip_b32 --owlvit_checkpoint ~/scenic/training/checkpoint_16000 --hf_config vit_b32 --pytorch_dump_folder_path .`
it fails with:
```
Traceback (most recent call last):
File "~/transformers/src/transformers/models/owlvit/convert_owlvit_original_flax_to_hf.py", line 406, in <module>
variables = checkpoints.restore_checkpoint(args.owlvit_checkpoint, target=None)["optimizer"]["target"]
KeyError: 'optimizer'
```
How to fix that?
P.S. the dict returned by `checkpoints.restore_checkpoint()` has the following keys: `['opt_state', 'params', 'global_step', 'model_state', 'rng', 'metadata']`
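Following the replies, a hedged sketch of the two local edits to the conversion script discussed in this thread: read the parameters from the `"params"` key instead of `["optimizer"]["target"]`, and use `collections.abc.MutableMapping` (which replaced `collections.MutableMapping` in Python 3.10). The helper below mirrors the script's `flatten_nested_dict` (the `sep="/"` default is an assumption) and is runnable on a toy dict:

```python
import collections.abc

def flatten_nested_dict(params, parent_key="", sep="/"):
    # Flatten a nested parameter tree into "path/to/param" keys, as the
    # conversion script does; note collections.abc, not collections.
    items = []
    for k, v in params.items():
        new_key = parent_key + sep + k if parent_key else k
        if isinstance(v, collections.abc.MutableMapping):
            items.extend(flatten_nested_dict(v, new_key, sep=sep).items())
        else:
            items.append((new_key, v))
    return dict(items)

# In the script itself the corresponding change would be (hypothetical):
# variables = checkpoints.restore_checkpoint(args.owlvit_checkpoint, target=None)["params"]
nested = {"backbone": {"clip": {"w": 1, "inner": {"b": 2}}}}
print(flatten_nested_dict(nested["backbone"]["clip"]))  # {'w': 1, 'inner/b': 2}
```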
### Who can help?
@amyeroberts @alaradirik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`python3 convert_owlvit_original_flax_to_hf.py --owlvit_version clip_b32 --owlvit_checkpoint ~/scenic/training/checkpoint_16000 --hf_config vit_b32 --pytorch_dump_folder_path .`
### Expected behavior
successful script running with converted model as output
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21807/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21806
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21806/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21806/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21806/events
|
https://github.com/huggingface/transformers/issues/21806
| 1,599,937,440
|
I_kwDOCUB6oc5fXRug
| 21,806
|
Tokenizer call function gives an error when using the "target_text "argument without using "text" argument.
|
{
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You version of Transformers is too low, you need to upgrade it as `target_text` is a somewhat recent feature.",
"Should I install tranformers from source ? Because I also tried PIP and\ndidn’t work.\n\nOn Mon, 27 Feb 2023 at 9:02 PM, Sylvain Gugger ***@***.***>\nwrote:\n\n> You version of Transformers is too low, you need to upgrade it as\n> target_text is a somewhat recent feature.\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/21806#issuecomment-1445870119>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGUREYYFIHEIQHKINCDWZRNSHANCNFSM6AAAAAAVIIOMAI>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n",
"This change was introduced in `transformers==v4.22.0`. Try `pip install --upgrade transformers` as `pip install transformers` will do nothing if you already have the library. ",
"it worked. Thanks"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### System Info
Name: transformers
Version: 4.21.2
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Author-email: transformers@huggingface.co
License: Apache
Location: /databricks/python3/lib/python3.9/site-packages
Requires: packaging, pyyaml, filelock, numpy, regex, tokenizers, tqdm, huggingface-hub, requests
@ArthurZucker
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. `tokenizer_label = AutoTokenizer.from_pretrained(base_model)`
2. `labels = tokenizer_label(text_target=targets, padding=False, truncation=True)`
### Expected behavior
TypeError: __call__() missing 1 required positional argument: 'text'
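As noted in the replies, `text_target` was introduced in `transformers==v4.22.0`, so the installed 4.21.2 raises this error. A minimal, stdlib-only sketch of checking whether a version string is new enough (the helper name is illustrative, and it assumes a plain `major.minor.patch` string):

```python
def supports_text_target(version: str) -> bool:
    # text_target was added to the tokenizer __call__ in transformers v4.22.0
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) >= (4, 22)

print(supports_text_target("4.21.2"))  # False -> `pip install --upgrade transformers`
print(supports_text_target("4.26.1"))  # True
```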
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21806/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21805
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21805/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21805/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21805/events
|
https://github.com/huggingface/transformers/issues/21805
| 1,599,914,964
|
I_kwDOCUB6oc5fXMPU
| 21,805
|
libssl.so.10: cannot open shared object file: No such file or directory
|
{
"login": "falconair",
"id": 365542,
"node_id": "MDQ6VXNlcjM2NTU0Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/365542?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/falconair",
"html_url": "https://github.com/falconair",
"followers_url": "https://api.github.com/users/falconair/followers",
"following_url": "https://api.github.com/users/falconair/following{/other_user}",
"gists_url": "https://api.github.com/users/falconair/gists{/gist_id}",
"starred_url": "https://api.github.com/users/falconair/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/falconair/subscriptions",
"organizations_url": "https://api.github.com/users/falconair/orgs",
"repos_url": "https://api.github.com/users/falconair/repos",
"events_url": "https://api.github.com/users/falconair/events{/privacy}",
"received_events_url": "https://api.github.com/users/falconair/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is not a library used by Transformers per se but Python. There is something wrong with your Python install via Conda, Python installed like this does not find the libssl.so.10 library.",
"I meet the exact same issue here while `pip` install cannot solve the problem.",
"tl;dr; `conda update tokenizers` solved the problem for me.\r\n\r\n---\r\n\r\nI think I had the same problem and this is how I solved it.\r\n\r\nI noticed that the error was related to the `Tokenizers` package:\r\n\r\n```\r\nfrom .tokenizers import (\r\nImportError: /lib/x86_64-linux-gnu/libssl.so.10: version `libssl.so.10' not found (required by /home/silas/miniconda3/envs/llama/lib/python3.8/site-packages/tokenizers/tokenizers.cpython-38-x86_64-linux-gnu.so)\r\n```\r\n\r\nSo I decided to check who was providing this library and if I was using the latest version. [PyPi](https://pypi.org/project/tokenizers/) shows that the latest version is 0.13.02 and the library is by Hugging Face (so we are in the right place LOL).\r\n\r\nAfter running `conda list`, I saw that I was using version 0.13.0.dev0. So I checked [Conda-Forge](https://anaconda.org/conda-forge/tokenizers) and found that they had the new version. Then I ran `conda update tokenizers` and that solved the problem for me.\r\n\r\nI hope that solves the problem for you. =)",
"I have the exact same issues after I used conda to install transformers. Pip is working fine, however.",
"\r\n\r\n\r\n> \r\n\r\nMy tokenizer version is 0.13.0.dev0, but conda update tokenizers doesn't work for me. I also tried conda install -c conda-forge tokenizers on [Conda-Forge](https://anaconda.org/conda-forge/tokenizers), it doesn't work either. How can I update the tokenizers version?\r\n",
"@lilyq I had the same issue. I uninstalled transformers/tokenizers first and then pip reinstalled from source using `pip install git+https://github.com/huggingface/transformers` (all within my conda env). This installed the right version of tokenizers as a dependency and now it works. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> \r\n\r\nconda update tokenizers worked great for me, thank you",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,689
| 1,689
|
NONE
| null |
### System Info
I am setting up a brand new machine with Ubuntu 22.04, pytorch 1.13.1/pytorch-cuda 11.7 and transformers 4.24.0
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I installed transformers using the following command, as suggested by huggingface docs:
`conda install -c huggingface transformers --y`
I'm running the following command: `from transformers import pipeline`
I'm getting the following exception:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
~/anaconda3/lib/python3.9/site-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
1075 try:
-> 1076 return importlib.import_module("." + module_name, self.__name__)
1077 except Exception as e:
~/anaconda3/lib/python3.9/importlib/__init__.py in import_module(name, package)
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
128
~/anaconda3/lib/python3.9/importlib/_bootstrap.py in _gcd_import(name, package, level)
~/anaconda3/lib/python3.9/importlib/_bootstrap.py in _find_and_load(name, import_)
~/anaconda3/lib/python3.9/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)
~/anaconda3/lib/python3.9/importlib/_bootstrap.py in _load_unlocked(spec)
~/anaconda3/lib/python3.9/importlib/_bootstrap_external.py in exec_module(self, module)
~/anaconda3/lib/python3.9/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)
~/anaconda3/lib/python3.9/site-packages/transformers/pipelines/__init__.py in <module>
32 from ..feature_extraction_utils import PreTrainedFeatureExtractor
---> 33 from ..models.auto.configuration_auto import AutoConfig
34 from ..models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING, AutoFeatureExtractor
~/anaconda3/lib/python3.9/site-packages/transformers/models/__init__.py in <module>
18
---> 19 from . import (
20 albert,
~/anaconda3/lib/python3.9/site-packages/transformers/models/mt5/__init__.py in <module>
39 if is_tokenizers_available():
---> 40 from ..t5.tokenization_t5_fast import T5TokenizerFast
41 else:
~/anaconda3/lib/python3.9/site-packages/transformers/models/t5/tokenization_t5_fast.py in <module>
22
---> 23 from ...tokenization_utils_fast import PreTrainedTokenizerFast
24 from ...utils import is_sentencepiece_available, logging
~/anaconda3/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py in <module>
24
---> 25 import tokenizers.pre_tokenizers as pre_tokenizers_fast
26 from tokenizers import Encoding as EncodingFast
~/anaconda3/lib/python3.9/site-packages/tokenizers/__init__.py in <module>
78
---> 79 from .tokenizers import (
80 Tokenizer,
ImportError: libssl.so.10: cannot open shared object file: No such file or directory
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_121111/4287807559.py in <module>
----> 1 from transformers import pipeline
~/anaconda3/lib/python3.9/importlib/_bootstrap.py in _handle_fromlist(module, fromlist, import_, recursive)
~/anaconda3/lib/python3.9/site-packages/transformers/utils/import_utils.py in __getattr__(self, name)
1064 value = self._get_module(name)
1065 elif name in self._class_to_module.keys():
-> 1066 module = self._get_module(self._class_to_module[name])
1067 value = getattr(module, name)
1068 else:
~/anaconda3/lib/python3.9/site-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
1076 return importlib.import_module("." + module_name, self.__name__)
1077 except Exception as e:
-> 1078 raise RuntimeError(
1079 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1080 f" traceback):\n{e}"
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
libssl.so.10: cannot open shared object file: No such file or directory
```
### Expected behavior
Please note that I'm running the official install instructions on a brand new machine!
There are two other tickets with the same issue:
https://github.com/huggingface/transformers/issues/18549
https://github.com/huggingface/transformers/issues/19844
Both are closed because the user simply switched to using pip. But the problem remains with conda installs.
This error also resolves for me if I use `pip install transformers --force-reinstall`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21805/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21804
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21804/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21804/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21804/events
|
https://github.com/huggingface/transformers/pull/21804
| 1,599,869,533
|
PR_kwDOCUB6oc5Kw7bE
| 21,804
|
introduce `logger.warning_once` and use it for grad checkpointing code
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR:
1. introduces a new `warning_once` logger method - to prevent repeating the same warning more than once
2. uses it for the gradient_checkpointing functionality - where one would currently get thousands of these warnings should they have `use_cache=True` - the other solution is to assert
(I did this for m4, so thought to sync here as well)
The rename was done automatically with:
```
perl -0777 -pi -e 's|(logger.warning)(\(\W+\S\Suse_cache=True)|logger.warning_once$2|msg' src/transformers/models/*/mode*py
```
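The deduplication behaviour can be sketched with `functools.lru_cache` (a minimal illustration of the idea, not necessarily the exact code merged in this PR):

```python
import functools
import logging

logger = logging.getLogger("transformers.demo")
logger.addHandler(logging.StreamHandler())

@functools.lru_cache(None)
def warning_once(message: str) -> None:
    # lru_cache memoizes on the message string, so an identical warning
    # is emitted at most once per process; later calls are cache hits.
    logger.warning(message)

for _ in range(1000):
    warning_once("`use_cache=True` is incompatible with gradient checkpointing...")
# The warning is printed a single time, not 1000 times.
```

The trade-off of keying on the full message string is that two warnings differing only in an interpolated value are both emitted, which is usually the desired behaviour.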
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21804/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21804/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21804",
"html_url": "https://github.com/huggingface/transformers/pull/21804",
"diff_url": "https://github.com/huggingface/transformers/pull/21804.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21804.patch",
"merged_at": 1677533106000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21803
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21803/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21803/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21803/events
|
https://github.com/huggingface/transformers/issues/21803
| 1,599,813,196
|
I_kwDOCUB6oc5fWzZM
| 21,803
|
Masking ratio incorrect for DataCollatorForLanguageModeling
|
{
"login": "anruijian",
"id": 115125339,
"node_id": "U_kgDOBtysWw",
"avatar_url": "https://avatars.githubusercontent.com/u/115125339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anruijian",
"html_url": "https://github.com/anruijian",
"followers_url": "https://api.github.com/users/anruijian/followers",
"following_url": "https://api.github.com/users/anruijian/following{/other_user}",
"gists_url": "https://api.github.com/users/anruijian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anruijian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anruijian/subscriptions",
"organizations_url": "https://api.github.com/users/anruijian/orgs",
"repos_url": "https://api.github.com/users/anruijian/repos",
"events_url": "https://api.github.com/users/anruijian/events{/privacy}",
"received_events_url": "https://api.github.com/users/anruijian/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Note that the Transformers library is primarily a library of models, not data collators. This particular data collator should never have been added to the library proper but only in the example that uses it (it's also quite buggy and only works for BERT models). We welcome a PR with bug fixes (for the TF one apparently) but won't add more functionality to it.",
"@sgugger Thanks for the clarification. I will submit a PR for fixing the bug. ",
"@sgugger I have created a PR #21834 to fix the typo! Thank you!"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.26.0
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
It seems that there is an error in the implementation of the [torch_mask_tokens()](https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/data/data_collator.py#L750) method in the [DataCollatorForLanguageModeling](https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/data/data_collator.py#L609) class.
According to the documentation, 80% of tokens are masked, 10% being replaced with the original tokens and 10% with random tokens. <s>However, the implementation for replacing tokens with random ones sets the probability at 50% instead of the intended 10%.</s>
``` python
indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
random_words = torch.randint(len(self.tokenizer), labels.shape, dtype=torch.long)
inputs[indices_random] = random_words[indices_random]
```
EDIT: After a second look, it seems the logic here is that 80% of the tokens are masked; out of the remaining 20% of tokens, half need to be replaced by random tokens. So the probability 0.5 is correct. However, for [tf_mask_tokens()](https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/data/data_collator.py#L661), the probability is 0.1, which seems incorrect. Let me know if I understand it correctly!
``` python
indices_random = self.tf_bernoulli(input_shape, 0.1) & masked_indices & ~indices_replaced
random_words = tf.random.uniform(input_shape, maxval=vocab_size, dtype=tf.int64)
inputs = tf.where(indices_random, random_words, inputs)
```
See the [link](https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/data/data_collator.py#L776) here.
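To sanity-check the arithmetic, the two-stage draw can be simulated with plain Python (a sketch; the helper name `split_masked_tokens` is illustrative, and the real collator works on tensors rather than a loop):

```python
import random

def split_masked_tokens(num_masked: int, seed: int = 0):
    """Mirror the two-stage bernoulli logic of torch_mask_tokens for the
    positions already selected for masking."""
    rng = random.Random(seed)
    mask_tok = rand_tok = kept = 0
    for _ in range(num_masked):
        if rng.random() < 0.8:    # stage 1: 80% become [MASK]
            mask_tok += 1
        elif rng.random() < 0.5:  # stage 2: half of the remaining 20% -> random token
            rand_tok += 1
        else:                     # the other half keep the original token
            kept += 1
    return mask_tok, rand_tok, kept

m, r, k = split_masked_tokens(100_000)
print(m / 100_000, r / 100_000, k / 100_000)
# Proportions land near 0.80 / 0.10 / 0.10, so 0.5 in stage 2 is the value
# that yields the documented 10% random replacement; 0.1 there (as in
# tf_mask_tokens) would yield only about 2%.
```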
Furthermore, I believe that the ratios for masked/original/random tokens should be configurable parameters that are accessible to users. At present, I have to inherit the `DataCollatorForLanguageModeling` class and override the `torch_mask_tokens` function in order to modify the ratio.
I can submit a pull request to address the bug and update the ratio parameter. Thank you very much!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21803/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21802
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21802/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21802/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21802/events
|
https://github.com/huggingface/transformers/pull/21802
| 1,599,784,183
|
PR_kwDOCUB6oc5KwqhR
| 21,802
|
Add BLIP and BLIP-2 to image-to-text pipeline
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh Are these tested automatically now ? Or should we add something to make sure they are tested too ?",
"Hey @Narsil !\r\n\r\nMy PR #21516 is not merged yet, and on `main` it is still using `metaclass`, so the pipeline tests (in theory) are generated on the fly (just as you did before).\r\n\r\nBut the problem is that we don't generate the tiny models offline yet for newly add models in `transformers`, and therefore those model classes are not tested in pipeline testing on current `main`.\r\n\r\nI plan to re-run the tiny model generation ASAP. If it's urgent for this PR, I can do it!",
"@NielsRogge Do you mind adding a tiny model and a small test in the `tests/pipelines/` directly for this PR maybe ? (The PR looks good, I'd just like to make sure it's not breaking on small models within tests if possible).",
"FYI: I tried to create tiny models for `blip` and `blip-2` using the existing script, but they both failed to create.\r\n\r\n- `blip-2`: there is no `Blip2ModelTest` class\r\n - (there is `Blip2ForConditionalGenerationTest`, as well as `Blip2VisionModelTest`)\r\n - but the creation script needs to know from `Blip2ModelTest`\r\n - ~~(I am not really in favor to further complicating the creation script - it's super complex already)~~\r\n - it's better if we can manage to have some naming convention in modeling test files\r\n - (I agree that the current test names in `blip-2` make sense however)\r\n\r\n- `blip`: processor fails to be created (due to `feature_extractor` attribute)\r\n - I will try to fix this and create tiny model for `blip`\r\n",
"I will try to work on enhancing the script. But if you somehow manage to create them manually (model/tokenizer/processor), go ahead.",
"Close this one as the task is completed in #21904\r\n\r\nThank you @NielsRogge for take the initiative."
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds BLIP and BLIP-2 to the image-to-text pipeline.
Usage is as follows:
```
from transformers import pipeline
from transformers import AutoProcessor, BlipForConditionalGeneration, Blip2ForConditionalGeneration
# BLIPv1
# processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
# model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
# BLIPv2
processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
pipe = pipeline("image-to-text", model=model, image_processor=processor.image_processor, tokenizer=processor.tokenizer)
print(pipe("http://images.cocodataset.org/val2017/000000039769.jpg"))
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21802/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21802",
"html_url": "https://github.com/huggingface/transformers/pull/21802",
"diff_url": "https://github.com/huggingface/transformers/pull/21802.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21802.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21801
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21801/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21801/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21801/events
|
https://github.com/huggingface/transformers/issues/21801
| 1,599,750,737
|
I_kwDOCUB6oc5fWkJR
| 21,801
|
Adding additional terms to the Transformers glossary
|
{
"login": "MichaelRipa",
"id": 51883134,
"node_id": "MDQ6VXNlcjUxODgzMTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/51883134?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MichaelRipa",
"html_url": "https://github.com/MichaelRipa",
"followers_url": "https://api.github.com/users/MichaelRipa/followers",
"following_url": "https://api.github.com/users/MichaelRipa/following{/other_user}",
"gists_url": "https://api.github.com/users/MichaelRipa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MichaelRipa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichaelRipa/subscriptions",
"organizations_url": "https://api.github.com/users/MichaelRipa/orgs",
"repos_url": "https://api.github.com/users/MichaelRipa/repos",
"events_url": "https://api.github.com/users/MichaelRipa/events{/privacy}",
"received_events_url": "https://api.github.com/users/MichaelRipa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @MKhalusova and @stevhliu ",
"Thanks for the great suggestions, and for adding links to other docs for some of the definitions. Feel free to open a PR for these new definitions/changes! 😄 \r\n\r\n> encoder, decoder and encoder/decoder\r\n\r\nI think for encoder and decoder, we could combine those with the existing definitions for autoencoding/autoregressive, instead of having two separate definitions that basically explain the same thing. The summary of the models [guide](https://huggingface.co/docs/transformers/main/en/model_summary) has actually been recently updated such that you can't really link to it.\r\n\r\n> Finally, it might be worth putting acronyms beside glossary terms\r\n\r\nGreat idea!",
"Excellent, I will get started on this over the weekend, thanks for the feedback! 🙂\r\n\r\n> I think for encoder and decoder, we could combine those with the existing definitions for autoencoding/autoregressive, instead of having two separate definitions that basically explain the same thing.\r\n\r\nJust to be clear here, would this be renaming existing autoencoding/autoregressive entries as encoder/decoder & mentioning that autoencoding/autoregressive are synonyms?\r\n\r\n",
"Awesome, looking forward to your contribution! 🤗\r\n\r\n> Just to be clear here, would this be renaming existing autoencoding/autoregressive entries as encoder/decoder & mentioning that autoencoding/autoregressive are synonyms?\r\n\r\nYeah, I think having entries for encoder/decoder would be better than autoencoding/autoregressive.",
"Thanks for your help! 🙌 I've got a draft complete locally with the proposed changes and will make a PR in the next day or so. I'll close this issue for the time being though! "
] | 1,677
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
### Feature request
Adding definitions to the [Transformers glossary](https://huggingface.co/docs/transformers/glossary) for each of the following terms:
- **encoder, decoder and encoder/decoder:** You already have definitions for autoencoding and autoregressive models but it's not immediately clear from the glossary that those are synonymous to encoder and decoder. Could point to relevant sections of the [Summary of the models](https://huggingface.co/docs/transformers/model_summary) article.
- **finetuned model:** There already is a glossary term for pretrained model, can link to the [Fine-tune a pretrained model](https://huggingface.co/docs/transformers/training) article
- **inference**
- **pipeline:** Could link to the [Pipelines for inference](https://huggingface.co/docs/transformers/pipeline_tutorial) article
- **preprocessing:** Can link to [the preprocess document](https://huggingface.co/docs/transformers/preprocessing)
- **supervised and unsupervised learning**
A few other terms which might be worth defining (even if they don't show up as much in your documentation) are **representation learning**, **semi-supervised learning**, **feature extraction**, **Large Language Models (LLM)** (mainly because it is such a popular term now) and **transfer learning**.
Finally, it might be worth putting acronyms beside glossary terms like **natural language processing/understanding** and **recurrent neural networks** (i.e. NLP/U and RNN) both for brevity and because they are so commonly used.
### Motivation
The glossary page has been quite helpful for me in understanding certain overloaded terms in deep learning, and I feel that adding these terms would be beneficial to others. It also could help link people to useful articles as many of the above terms have been explained already in one of your articles which helps with keeping things organized.
### Your contribution
I would be happy to help with coming up with the definitions and submitting a PR with the added changes 🙂
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21801/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21801/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21800
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21800/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21800/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21800/events
|
https://github.com/huggingface/transformers/pull/21800
| 1,599,661,921
|
PR_kwDOCUB6oc5KwSoK
| 21,800
|
[deepspeed] check whether model is NLP one instead of counting on input type
|
{
"login": "izapolsk",
"id": 21039333,
"node_id": "MDQ6VXNlcjIxMDM5MzMz",
"avatar_url": "https://avatars.githubusercontent.com/u/21039333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/izapolsk",
"html_url": "https://github.com/izapolsk",
"followers_url": "https://api.github.com/users/izapolsk/followers",
"following_url": "https://api.github.com/users/izapolsk/following{/other_user}",
"gists_url": "https://api.github.com/users/izapolsk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/izapolsk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/izapolsk/subscriptions",
"organizations_url": "https://api.github.com/users/izapolsk/orgs",
"repos_url": "https://api.github.com/users/izapolsk/repos",
"events_url": "https://api.github.com/users/izapolsk/events{/privacy}",
"received_events_url": "https://api.github.com/users/izapolsk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@izapolsk, I think the fix is much simpler. Just check if it's any type of `int` - I didn't think anybody would use a different int type than int64 when I coded this, so as new use cases come in, we can just adapt for it. \r\n\r\n```\r\n if self.deepspeed and data.dtype not in [torch.in32, torch.int64]:\r\n # NLP models inputs are int32 or int64 and those get adjusted to the right dtype of the\r\n```\r\n\r\nIf it resonates and works you can go ahead and apply that fix, or I can do it as well. It'd be easier for you since you have an application you can already test with.\r\n\r\nand if this will be the way, I don't think we need any additional tests.",
"Actually, I wonder why were we using int64 in the first place when vocabs are so small. int32 should work always and for smaller vocabs even int16 should be enough (max `32767`). \r\n\r\nProbably since there is very little saving in using a more compact dtype as inputs, if they are tokenized on the fly are very short.",
"@stas00, done.\r\nI added more sophisticated check based on checking first layer because vocab could be int16 - 64 and there could be other non NLP models I'm not aware of having int input. \r\n\r\nThank you for reviewing this PR.",
"Let me re-run the offline tests first",
"Nope, the tests were failing. The logic was incorrect. I pushed the fix.\r\n\r\n@izapolsk, please check that with my fix it still works for you and then we can merge it.\r\n\r\nThank you.\r\n\r\np.s. Also since it seems that this is not the last of your deepspeed improvements, here is how you can test that your future PRs work:\r\n\r\n```\r\nRUN_SLOW=1 pytest tests/deepspeed\r\n```\r\n\r\nthis is because CircleCI has no GPUs, so we only run those tests requiring gpus on a different CI nightly. ",
"Oh good catch, missed that not. We'll need a quick rebase on main to get the quality job passing if possible (the failure seen here is fixed on main).",
"good catch, my bad, sorry. I'll do"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR intends to fix an issue where training of an NLP model fails if the input dtype isn't int64.
My dataset had dtype = int32. Everything was ok until I decided to add deepspeed.
It turned out that the trainer relies on the dtype and converts the input data into hf_deepspeed_config.dtype if it isn't int64.
I guess it has to check whether the first layer is an Embedding instead.
I think this PR also needs tests but I need advice on how we can cover this case.
@stas00 could you be so kind as to review this PR and advise on whether tests are necessary and how to implement them?
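A minimal sketch of the dtype-based guard discussed in review (the function name `maybe_cast_for_deepspeed` is illustrative, not the actual trainer API): only floating-point inputs are cast to the deepspeed compute dtype, so integer token ids of any width (int16/int32/int64) pass through untouched.

```python
import torch

def maybe_cast_for_deepspeed(tensor: torch.Tensor,
                             ds_dtype: torch.dtype = torch.float16) -> torch.Tensor:
    # NLP inputs are integer token ids; casting them to the deepspeed
    # compute dtype would corrupt them, so cast floating-point data only.
    if tensor.is_floating_point():
        return tensor.to(ds_dtype)
    return tensor

print(maybe_cast_for_deepspeed(torch.zeros(2, dtype=torch.int32)).dtype)    # torch.int32
print(maybe_cast_for_deepspeed(torch.zeros(2, dtype=torch.float32)).dtype)  # torch.float16
```

Keying on "is this tensor floating point" rather than "is it int64" avoids hard-coding a list of integer widths, which was the failure mode with int32 datasets.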
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21800/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21800",
"html_url": "https://github.com/huggingface/transformers/pull/21800",
"diff_url": "https://github.com/huggingface/transformers/pull/21800.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21800.patch",
"merged_at": 1677674496000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21799
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21799/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21799/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21799/events
|
https://github.com/huggingface/transformers/pull/21799
| 1,599,560,969
|
PR_kwDOCUB6oc5Kv98z
| 21,799
|
Fix en documentation typos
|
{
"login": "tpaviot",
"id": 660130,
"node_id": "MDQ6VXNlcjY2MDEzMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/660130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tpaviot",
"html_url": "https://github.com/tpaviot",
"followers_url": "https://api.github.com/users/tpaviot/followers",
"following_url": "https://api.github.com/users/tpaviot/following{/other_user}",
"gists_url": "https://api.github.com/users/tpaviot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tpaviot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tpaviot/subscriptions",
"organizations_url": "https://api.github.com/users/tpaviot/orgs",
"repos_url": "https://api.github.com/users/tpaviot/repos",
"events_url": "https://api.github.com/users/tpaviot/events{/privacy}",
"received_events_url": "https://api.github.com/users/tpaviot/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Fix a wrong URL as well as typos
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @sgugger, @stevhliu and @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21799/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21799",
"html_url": "https://github.com/huggingface/transformers/pull/21799",
"diff_url": "https://github.com/huggingface/transformers/pull/21799.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21799.patch",
"merged_at": 1677483396000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21798
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21798/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21798/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21798/events
|
https://github.com/huggingface/transformers/pull/21798
| 1,599,528,564
|
PR_kwDOCUB6oc5Kv3f8
| 21,798
|
Fix resume_from_checkpoint for deepspeed
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"ok, looks like we figured out the original so closing this one."
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
something is borked with CircleCI when a contributor has a CircleCI account that isn't set up to some requirements - we don't know what - so I re-created the PR from the original https://github.com/huggingface/transformers/pull/21735
------------------
This PR overcomes a possible issue with using deepspeed resume when the non-deepspeed checkpoint file structure isn't there.
The original code comes from @mosheber and I had to apply a few more adjustments for tests to work after this change. The tests had to be run manually since they require gpus.
Credits to contributor's work have been correctly imported into this new PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21798/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21798",
"html_url": "https://github.com/huggingface/transformers/pull/21798",
"diff_url": "https://github.com/huggingface/transformers/pull/21798.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21798.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21797
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21797/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21797/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21797/events
|
https://github.com/huggingface/transformers/issues/21797
| 1,599,415,554
|
I_kwDOCUB6oc5fVSUC
| 21,797
|
How to prune a transformer?
|
{
"login": "jyotiyadav94",
"id": 72126242,
"node_id": "MDQ6VXNlcjcyMTI2MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/72126242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jyotiyadav94",
"html_url": "https://github.com/jyotiyadav94",
"followers_url": "https://api.github.com/users/jyotiyadav94/followers",
"following_url": "https://api.github.com/users/jyotiyadav94/following{/other_user}",
"gists_url": "https://api.github.com/users/jyotiyadav94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jyotiyadav94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jyotiyadav94/subscriptions",
"organizations_url": "https://api.github.com/users/jyotiyadav94/orgs",
"repos_url": "https://api.github.com/users/jyotiyadav94/repos",
"events_url": "https://api.github.com/users/jyotiyadav94/events{/privacy}",
"received_events_url": "https://api.github.com/users/jyotiyadav94/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please do not tag so many people, especially for an issue which is linked to the optimum repo (where you found this tutorial) and not the Transformers one.",
"okay Thank you so much for the suggestions I will remove from the task. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,680
| 1,680
|
NONE
| null |
### System Info
Hi, I am trying to reduce the memory usage and speed up inference of my own fine-tuned transformer. I came across the [tutorial](https://huggingface.co/docs/optimum/intel/optimization_inc) for pruning on the Hugging Face site and am referring to the snippet below. The `trainer.train()` call was missing, so I added it. It ran without error; however, there is no reduction in memory (`model.get_memory_footprint()` reported 503695916 bytes both before and after pruning). The same holds for inference speed. I also tried different pruning configurations (global pruning, different pruning types, different target sparsities), but it did not help. Can someone help me?
```
from optimum.intel.neural_compressor import INCTrainer
from neural_compressor import WeightPruningConfig
from transformers import AutoModelForSequenceClassification, TrainingArguments
from transformers.data.data_collator import default_data_collator

pruning_config = WeightPruningConfig(
    pruning_type="magnitude",
    start_step=0,
    end_step=15,
    target_sparsity=0.2,
    pruning_scope="local",
)

save_dir = "prunedModel"
trainer = INCTrainer(
    model=model,
    pruning_config=pruning_config,
    args=TrainingArguments(
        save_dir,
        max_steps=500,
        num_train_epochs=1.0,
        do_train=True,
        do_eval=True,
        metric_for_best_model="f1",
        greater_is_better=True,
    ),
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
    tokenizer=processor,
    data_collator=default_data_collator,
)
train_result = trainer.train()  # <-- Added by me
trainer.save_model(save_dir)    # <-- Added by me

optimized_model = AutoModelForSequenceClassification.from_pretrained(save_dir)
memory_footprint = optimized_model.get_memory_footprint()
print(f"Model memory footprint: {memory_footprint} bytes")
```
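One likely explanation for the unchanged footprint: unstructured magnitude pruning zeroes individual weights in place but does not change tensor shapes, so a dense checkpoint occupies exactly the same memory. A minimal NumPy sketch (names are illustrative, not part of the tutorial) showing that an array pruned to 20% sparsity still uses the same storage:

```python
import numpy as np

def sparsity(weights):
    """Fraction of exactly-zero entries in a weight array."""
    return float((weights == 0).mean())

rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100))
bytes_before = w.nbytes

# Zero the smallest-magnitude 20% of weights, mimicking magnitude pruning.
k = int(0.2 * w.size)
threshold = np.sort(np.abs(w), axis=None)[k - 1]
w[np.abs(w) <= threshold] = 0.0

print(f"sparsity: {sparsity(w):.2%}")   # ~20% of weights are now zero
print(w.nbytes == bytes_before)         # dense storage is unchanged
```

To see an actual size reduction, the zeroed weights would need to be stored in a sparse format or removed structurally (e.g. whole heads/channels), which `get_memory_footprint()` on a dense model will not reflect.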
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Expected behavior
As per the tutorial, the model should be pruned, so the unpruned and pruned models should have different sizes, but they report the same model memory footprint.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21797/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21796
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21796/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21796/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21796/events
|
https://github.com/huggingface/transformers/issues/21796
| 1,599,327,160
|
I_kwDOCUB6oc5fU8u4
| 21,796
|
LLaMA
|
{
"login": "michaelroyzen",
"id": 45830328,
"node_id": "MDQ6VXNlcjQ1ODMwMzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/45830328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelroyzen",
"html_url": "https://github.com/michaelroyzen",
"followers_url": "https://api.github.com/users/michaelroyzen/followers",
"following_url": "https://api.github.com/users/michaelroyzen/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelroyzen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelroyzen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelroyzen/subscriptions",
"organizations_url": "https://api.github.com/users/michaelroyzen/orgs",
"repos_url": "https://api.github.com/users/michaelroyzen/repos",
"events_url": "https://api.github.com/users/michaelroyzen/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelroyzen/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Hello, @michaelroyzen , I want to work on this issue, can you please clarify this:-\r\n1. The objective of this issue is to add the Llama model to the 🤗 models section right ?\r\nThe inference code for the Llama models is open sourced and weights and tokenizers are available as you mentioned. \r\nI can try to work on this issue, Please let me know if this issue is open for working and should I proceed or not. ",
"Hello @sayantan1410. At this moment the code for inference is available, but to get the weights you need to fill out the request form from their github. It'd be great for you to work on this, but it would require doing so with a hypothethical set of weights, given that they have not started actually releasing weights to people who asked for it just yet.",
"Hello @Eric-Wallace-WebHost , I have actually filled up the form for the weights and the tokenizers but since I don't have any related publications so probably, I will not get that. But for now, I will try to work with some hypothetical weights until the weights are released !",
"Also will there be a Jax implementation? It would be super helpful. I can help contribute to it as well",
"I can contribute as well for the Jax implementation! Also I'm not sure if we can just use their pytorch code, since it is released under GPLv3 instead of the Apache License of transformers.",
"I have the weights. Haven't checked out the rules and I'm gonna assume I can't share it, but if you guys have an implementation I would love to help by testing it out.",
"At this stage we don't know if there is going to be an implementation in Transformers due to:\r\n- inaccessibility of weights (no one who got them is allowed to share them on the Hub)\r\n- different license of the code\r\n\r\nWe are looking if the Meta folks would be happy to release the weights in a gated repo on the Hub and if the code will be in Transformers or just put as code on the Hub because of the license. @thomasw21 is working on a PyTorch port that our research team will use in any case.\r\n\r\nSo stay tuned!",
"> At this stage we don't know if there is going to be an implementation in Transformers due to:\r\n> * inaccessibility of weights (no one who got them is allowed to share them on the Hub)\r\n\r\nEven if there is no permission to have the weights on the hub, usually transformers models are released with the conversion scripts done for the conversion. Even an implementation combined with the needed conversion script can be useful, because then researchers can convert the model to HF if needed and still use it within their HF based projects without having to reinvent the wheel.",
"+1 to henk717. Would be super useful even if there was just a way to plug in your own weights and use the existing transformers library!",
"It looks like the weights are right here.\r\n\r\nhttps://huggingface.co/nyanko7/LLaMA-7B\r\nhttps://huggingface.co/ricecake/LLaMA/tree/main\r\nhttps://huggingface.co/datasets/nyanko7/LLaMA-65B\r\n\r\nLicense is here:\r\n\r\nhttps://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform",
"Working on this today!",
"Are weights actually copyrightable? Technically, they are just a list of numbers generated by a machine and hence don't fall under US copyright laws. \r\n\r\nI say, just upload the weights and call Meta's bluff.",
"> Are weights actually copyrightable? Technically, they are just a list of numbers generated by a machine and hence don't fall under US copyright laws.\r\n> \r\n> I say, just upload the weights and call Meta's bluff.\r\n\r\nlots of people are way ahead of you on this.",
"Can someone make an ONNX version? I tried to convert it but I ran out of RAM.\r\n\r\nI would quite like to try it with Onnxruntime. Even though I think this uses far more VRAM than using torch. Also onnxruntime has a memory leak with external weight files. But still...",
"I'm interested in fine-tuning LLaMa for creating text embeddings, anyone have any tips for how to do it with the LLaMa architecture? Can I just add a pooling layer at the end?\n\nhttps://github.com/nebuly-ai/nebullvm/tree/main/apps/accelerate/chatllama\n\nHere's code for RLHF training btw",
"I have a working Jax implementation [here](https://github.com/Sea-Snell/JAX_llama)"
] | 1,677
| 1,679
| 1,679
|
NONE
| null |
### Model description
New model series from Facebook (7B, 13B, 33B, 65B) that is broadly competitive with Flan-PaLM-540B.
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21796/reactions",
"total_count": 48,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 21,
"rocket": 27,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21796/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21795
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21795/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21795/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21795/events
|
https://github.com/huggingface/transformers/pull/21795
| 1,599,156,913
|
PR_kwDOCUB6oc5Kunbl
| 21,795
|
Fix page counting in Slack CI report script.
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21795). All of your documentation changes will be reflected on that endpoint.",
"My self doubt sprit and intuition led me to look the issue again, and it turned out the issue was coming from the GitHub API call rate limit was reached as we didn't use a token when making these calls. PR #21823 was opened and merged.\r\n\r\nThe current way of page counting was good - the first page could be `0`, `1` or without page number. The next one would be `2`. My math capability was reduced a lot, especially when I tried to quickly fix things on Friday."
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
We started having an issue getting the Slack report for our CI. The error shown at the end indicates there is a problem getting all the job links for a workflow run. I took a look and found a change is necessary.
I am not sure why it was that way, especially the `i+2` part, but I remember it had to be that way to avoid duplicated pages. Maybe GitHub Actions changed their API and that causes the issue.
```
"url": f"{github_actions_job_links['Extract warnings in CI artifacts']}",
KeyError: 'Extract warnings in CI artifacts'
```
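As the follow-up comment notes, the real culprit turned out to be the unauthenticated GitHub API rate limit (fixed in #21823 by passing a token). A hedged sketch of the paginated job-link fetch — helper names are hypothetical, not the actual script — with the `i + 2` page numbering and an optional auth token:

```python
import json
import math
from urllib.request import Request, urlopen

def pages_after_first(total_count, per_page=100):
    """Number of extra pages to fetch after the first per_page-sized page."""
    return max(0, math.ceil((total_count - per_page) / per_page))

def get_job_links(run_id, token=None):
    """Collect {job_name: html_url} for every job in a workflow run."""
    headers = {"Accept": "application/vnd.github+json"}
    if token is not None:
        # Authenticated calls get a far higher rate limit than anonymous ones.
        headers["Authorization"] = f"Bearer {token}"
    url = f"https://api.github.com/repos/huggingface/transformers/actions/runs/{run_id}/jobs?per_page=100"
    with urlopen(Request(url, headers=headers)) as r:
        result = json.loads(r.read())
    job_links = {job["name"]: job["html_url"] for job in result["jobs"]}
    # Page 1 was just fetched; subsequent pages are numbered 2, 3, ...
    # which is where the `i + 2` in the original script comes from.
    for i in range(pages_after_first(result["total_count"])):
        with urlopen(Request(url + f"&page={i + 2}", headers=headers)) as r:
            page = json.loads(r.read())
        job_links.update({job["name"]: job["html_url"] for job in page["jobs"]})
    return job_links
```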
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21795/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21795",
"html_url": "https://github.com/huggingface/transformers/pull/21795",
"diff_url": "https://github.com/huggingface/transformers/pull/21795.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21795.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21794
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21794/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21794/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21794/events
|
https://github.com/huggingface/transformers/pull/21794
| 1,599,107,732
|
PR_kwDOCUB6oc5KudE7
| 21,794
|
[GPTJ] Fix gradient checkpointing bug
|
{
"login": "krypticmouse",
"id": 43719685,
"node_id": "MDQ6VXNlcjQzNzE5Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/43719685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krypticmouse",
"html_url": "https://github.com/krypticmouse",
"followers_url": "https://api.github.com/users/krypticmouse/followers",
"following_url": "https://api.github.com/users/krypticmouse/following{/other_user}",
"gists_url": "https://api.github.com/users/krypticmouse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krypticmouse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krypticmouse/subscriptions",
"organizations_url": "https://api.github.com/users/krypticmouse/orgs",
"repos_url": "https://api.github.com/users/krypticmouse/repos",
"events_url": "https://api.github.com/users/krypticmouse/events{/privacy}",
"received_events_url": "https://api.github.com/users/krypticmouse/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger @amyeroberts As mentioned I did the following:-\r\n\r\n> Thanks for your PR! You need to remove the other one below (line 660).\r\n\r\nHowever won't that cause an issue with function declaration and it's corresponding else blocks? I saw other implementations of this fix and they don't remove the block below just add it again above. What do you think?",
"Hi @krypticmouse - thanks for your question and applying the update @sgugger requested. \r\n\r\nLooking at the diff again, L654 shouldn't be removed. I believe this should resolve the function declaration issue you mentioned. ",
"You will also need to resolved the conflict as `logger.warning` has been renamed to `logger.warning_once` since you opened your PR.",
"Is this ok to be merged now?"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter while using `generate` with models that use `gradient_checkpointing`.
Fixes Issue #21737
cc @sgugger @amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21794/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21794",
"html_url": "https://github.com/huggingface/transformers/pull/21794",
"diff_url": "https://github.com/huggingface/transformers/pull/21794.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21794.patch",
"merged_at": 1677597163000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21793
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21793/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21793/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21793/events
|
https://github.com/huggingface/transformers/pull/21793
| 1,599,100,331
|
PR_kwDOCUB6oc5Kubhq
| 21,793
|
check for None forced tokens
|
{
"login": "andyehrenberg",
"id": 32784181,
"node_id": "MDQ6VXNlcjMyNzg0MTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/32784181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andyehrenberg",
"html_url": "https://github.com/andyehrenberg",
"followers_url": "https://api.github.com/users/andyehrenberg/followers",
"following_url": "https://api.github.com/users/andyehrenberg/following{/other_user}",
"gists_url": "https://api.github.com/users/andyehrenberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andyehrenberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andyehrenberg/subscriptions",
"organizations_url": "https://api.github.com/users/andyehrenberg/orgs",
"repos_url": "https://api.github.com/users/andyehrenberg/repos",
"events_url": "https://api.github.com/users/andyehrenberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/andyehrenberg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #21791
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21793/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21793",
"html_url": "https://github.com/huggingface/transformers/pull/21793",
"diff_url": "https://github.com/huggingface/transformers/pull/21793.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21793.patch",
"merged_at": 1677587084000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21792
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21792/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21792/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21792/events
|
https://github.com/huggingface/transformers/pull/21792
| 1,598,951,836
|
PR_kwDOCUB6oc5Kt7uQ
| 21,792
|
Improve TF weight loading, especially PT crossloading
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This is ready for review now! It's not that big, but it changes a lot of things:\r\n\r\n- Loading a PT model in TF with `from_pt=True` now supports `load_weight_prefix`\r\n- Loading a sharded TF checkpoint now supports `load_weight_prefix`\r\n- Sharded TF checkpoints can now be loaded even when not all weights match (this allows model surgery that didn't work with sharded models before!)\r\n- TF `from_pretrained` now supports a `tf_to_pt_weight_rename` kwarg. This should be a callable function which converts TF weight names to PT weight names for that model.\r\n- Composite classes like `TFEncoderDecoder` and `TFVisionEncoderDecoder` have been refactored to use the `tf_to_pt_weight_rename` kwarg, which let me remove all the ingenious workarounds that @ydshieh needed when he added those classes.\r\n- I found a few small issues in other classes when I was testing this PR and fixed them. This is mostly just stuff like removing unused args and disabling TF32 in tests so that the outputs match.\r\n- Add tests for the new features to `test_modeling_tf_common`\r\n\r\ncc:\r\n@ArthurZucker because I touched your sharded weight loading code\r\n@ydshieh because I touched your composite model code \r\n@gante for TF review \r\n@sgugger as repository overlord"
] | 1,677
| 1,677
| 1,677
|
MEMBER
| null |
Draft PR for now, this will probably break a bunch of stuff until I get it all working!
- [X] Support `from_pt` and `load_weight_prefix` at the same time
- [x] Replace hacky loading code in models that was written to get around this issue
- [x] ~Test and possibly replace code paths that name submodules based on `cls.load_weight_prefix` - I think this is very risky~
- [x] Support `load_weight_prefix` in the `load_sharded` functions as well
- [x] ~Check the `cls._requires_load_weight_prefix` paths and see if there's a better solution~
- [x] Update any affected tests
- [x] Add test for `load_sharded` with `load_weight_prefix`
Classes using `load_weight_prefix` that may need updating:
- [x] BART
- [x] EncoderDecoder
- [x] VisionEncoderDecoder
- [x] RAG
- [x] Blenderbot
- [x] T5
- [x] LED
- [x] mBART
- [x] Marian
- [x] OPT
- [x] Pegasus
- [x] The cookiecutter template
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21792/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21792",
"html_url": "https://github.com/huggingface/transformers/pull/21792",
"diff_url": "https://github.com/huggingface/transformers/pull/21792.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21792.patch",
"merged_at": 1677609694000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21791
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21791/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21791/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21791/events
|
https://github.com/huggingface/transformers/issues/21791
| 1,598,928,197
|
I_kwDOCUB6oc5fTbVF
| 21,791
|
Flax Whisper predicts erroneous exclamation mark
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I think the expected behavior should be returning:\r\n\r\n`['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mischekvilder is the apostle of the middle classes, and we are glad to welcome his gospel.<|endoftext|>']`\r\n\r\nThe generation config has [[1, None], ...] for forced decoder ids, so the language is predicted by the model, and for some reason it's sampling \"!\". Maybe we should have a processor for only sampling from the language tokens for that first step when it is supposed to predict the language.",
"Indeed - that's correct we should be seeing the language token predicted at the second index!\r\n\r\nJust checked and `!` is the zero-th token in the tokenizer vocab -> maybe somethings going astray with the forced tokens logits processor when a `None` is passed as the forced token?",
"I'm seeing the problem - `force_token_array.at[index].set(token)` when `token` is `None` sets the value at `index` to 0. We should just make it so when `token` is None, we keep the value at that index at -1.",
"So should update FlaxForceTokensLogitsProcessor to:\r\n\r\n```\r\ndef __init__(self, force_token_map):\r\n force_token_map = dict(force_token_map)\r\n # Converts the dictionary of format {index: token} containing the tokens to be forced to an array, where the\r\n # index of the array corresponds to the index of the token to be forced, for XLA compatibility.\r\n # Indexes without forced tokens will have a negative value.\r\n force_token_array = jnp.ones((max(force_token_map.keys()) + 1), dtype=jnp.int32) * -1\r\n for index, token in force_token_map.items():\r\n if token is not None:\r\n force_token_array = force_token_array.at[index].set(token)\r\n self.force_token_array = jnp.int32(force_token_array)\r\n```",
"Also, just keep in mind that `forced_decoder_ids` has to be a static argument for jitted functions. The workaround I use is having empty `forced_decoder_ids` and instead passing them into `decoder_input_ids` when I know the forced ids might change.",
"Nice one @andyehrenberg! That must indeed be the root cause of the problem ✅. We'll have to pass the forced decoder ids as static argnums when we `pmap` the generate function in #21764",
"Hi\r\nCould you please specify how to set forced_decoder_ids for the FlaxWhisperForConditionalGeneration object?\r\n@sanchit-gandhi ",
"You should either modify the ` model.generation_config.forced_decoder_ids` or when calling `generate`, set the `language`, `task` and `return_timestamps` arguments. You can also pass them as `decoder_input_ids` (which is also an argument of the `generate()` function or ` forced_decoder_ids`. ",
"> You should either modify the ` model.generation_config.forced_decoder_ids` or when calling `generate`, set the `language`, `task` and `return_timestamps` arguments. You can also pass them as `decoder_input_ids` (which is also an argument of the `generate()` function or ` forced_decoder_ids`.\r\n\r\nIt does not respect the forced_decoder_ids when I pass it to model.generation_config.\r\nThis is my code:\r\n```\r\nmodel = FlaxWhisperForConditionalGeneration.from_pretrained(model_id, dtype=jnp.float16, from_pt=True)\r\njit_generate = jax.jit(model.generate, static_argnames=[\"max_length\"])\r\nmodel.generation_config.forced_decoder_ids = processor.get_decoder_prompt_ids(language=\"en\", task=\"translate\")\r\npred_ids = jit_generate(input_features, max_length=128)\r\n```\r\n\r\nBut the result is not translation to english (as same as when forced_decoder_ids is set to None)\r\n",
"Also, when I set `language` to `generate()` function, it raises error:\r\n```\r\npred_ids = jit_generate(input_features, max_length=128, language=\"<|en|>\")\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/jax/_src/api_util.py:568, in _str_abstractify(x)\r\n 567 def _str_abstractify(x):\r\n--> 568 raise TypeError(f\"Argument '{x}' of type {type(x)} is not a valid JAX type\")\r\n\r\nTypeError: Argument '<|en|>' of type <class 'str'> is not a valid JAX type\r\n```\r\n\r\nI've also tested `language='en'` and `language='english'` and the result is the same (following error)",
"That is because you are not using the latest version of transformers. All of this was adressed in #21965",
"Your error regarding strings not being a valid JAX type can be fixed by setting the language prior to compiling and keeping it static. It also looks like you’re changing the model’s generation config after wrapping its generate method in jit, which could be causing problems. My guidance is the set your generation parameters how you want, and then get a `partial(model.generate, arg1=val2, …)` and then compile that function (or just use static argnames).",
"It is resolved by passing the `language` parameter in the static_argnames:\r\n```\r\njit_generate = jax.jit(model.generate, static_argnames=[\"max_length\", \"language\"])\r\ninput_features = jnp.array(input_features, dtype=jnp.float16)\r\npred_ids = jit_generate(input_features, max_length=128, language='<|en|>')\r\n```\r\nThanks @andyehrenberg and @ArthurZucker \r\n"
] | 1,677
| 1,678
| 1,677
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.12.1+cpu (False)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.4 (gpu)
- Jax version: 0.3.25
- JaxLib version: 0.3.25
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi @andyehrenberg @ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code snippet:
```python
from transformers import FlaxWhisperForConditionalGeneration, WhisperProcessor
from datasets import load_dataset
model = FlaxWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
librispeech = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = librispeech[0]["audio"]["array"]
input_features = processor(sample, return_tensors="np").input_features
pred_ids = model.generate(input_features)
pred_text = processor.batch_decode(pred_ids.sequences)
print(pred_text)
```
**Print Output:**
```
['<|startoftranscript|>!<|transcribe|><|notimestamps|> Mischekvilder is the apostle of the middle classes, and we are glad to welcome his gospel.<|endoftext|>']
```
We see an extra `!` after the `<|startoftranscript|>` token that shouldn't be there.
Do you fancy taking a look into this one @andyehrenberg? Otherwise can try and find time next week.
### Expected behavior
Should return:
```
['<|startoftranscript|><|transcribe|><|notimestamps|> Mischekvilder is the apostle of the middle classes, and we are glad to welcome his gospel.<|endoftext|>']
```
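The fix discussed in the comments amounts to leaving un-forced indices at `-1` rather than letting a `None` token silently collapse to token id 0 (which decodes as `!`). A self-contained NumPy sketch of that construction (the actual processor uses `jnp` and `.at[index].set(...)`; NumPy is used here only so the snippet runs standalone):

```python
import numpy as np

def build_force_token_array(force_token_map):
    """Convert {index: token} into an array indexed by generation step.

    Steps without a forced token (value None, or absent from the map) stay -1,
    so the logits processor leaves those steps unconstrained — e.g. letting
    Whisper predict the language token itself.
    """
    force_token_array = np.full(max(force_token_map) + 1, -1, dtype=np.int32)
    for index, token in force_token_map.items():
        if token is not None:  # the bug: setting None became token id 0 ("!")
            force_token_array[index] = token
    return force_token_array

# Whisper-style forced ids: language at step 1 is left to the model (None).
print(build_force_token_array({1: None, 2: 50359, 3: 50363}).tolist())
# -> [-1, -1, 50359, 50363]
```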
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21791/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21791/timeline
|
completed
| null | null |