Dataset schema:
- repo: string (1 value)
- number: int64 (range 1 – 25.3k)
- state: string (2 values)
- title: string (length 1 – 487)
- body: string (length 0 – 234k)
- created_at: string (length 19)
- closed_at: string (length 19)
- comments: string (length 0 – 293k)
transformers
25,306
open
"Dynamic" Issue in LlamaDynamicNTKScalingRotaryEmbedding - Long context inference will impact short context inference.
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.22.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tenso...
08-04-2023 00:31:00
08-04-2023 00:31:00
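The "dynamic" issue above can be sketched as follows: dynamic NTK scaling rescales the RoPE base once the current sequence exceeds the trained context, so cos/sin caches computed during a long-context pass differ from the short-context ones. A minimal sketch of the base-rescaling formula (function name and defaults are illustrative, not the transformers API):

```python
def dynamic_ntk_base(base, dim, seq_len, max_position_embeddings, scaling_factor=1.0):
    """Rescale the RoPE base when the sequence exceeds the trained context.

    For short sequences the original base is returned unchanged, so short-context
    behaviour should only change if cached cos/sin tables from a long sequence
    are reused instead of being recomputed.
    """
    if seq_len <= max_position_embeddings:
        return base
    factor = (scaling_factor * seq_len / max_position_embeddings) - (scaling_factor - 1)
    return base * factor ** (dim / (dim - 2))

# Short context: base untouched; long context: base grows.
print(dynamic_ntk_base(10000.0, 128, 2048, 4096))            # 10000.0
print(dynamic_ntk_base(10000.0, 128, 8192, 4096) > 10000.0)  # True
```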
transformers
25,305
open
Unable to change default cache folders despite setting environment variables
### System Info Collecting environment information... PyTorch version: 2.0.1+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.2 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: Could no...
08-03-2023 23:42:20
08-03-2023 23:42:20
transformers
25,304
open
Tokenizer failing to encode chatml correctly
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.14.0-284.18.1.el9_2.x86_64-x86_64-with-glibc2.34 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True)...
08-03-2023 23:13:33
08-03-2023 23:13:33
transformers
25,303
open
loss reduction for `Llama2ForCausalLM.forward`
### Feature request In `forward` method, it outputs `loss` when `labels` are provided. But the `loss` shape is always `(1,)` because `reduction='mean'` in CrossEntropy. I wonder if I could pass `reduction='none'` and get a `(batch_size,)` shaped loss tensor. https://github.com/huggingface/transformers/blob/641adca5...
08-03-2023 21:29:20
08-03-2023 21:29:20
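The request above can be illustrated without torch: with `reduction='none'` semantics, cross-entropy yields one loss per token, which can then be averaged within each sequence to produce a `(batch_size,)` vector. A NumPy sketch of the idea (not the actual `CrossEntropyLoss` API):

```python
import numpy as np

def per_sample_ce(logits, labels):
    """Cross-entropy with reduction='none' semantics: one loss per position.

    logits: (batch, seq, vocab); labels: (batch, seq).
    Returns (batch,) by averaging per-token losses within each sequence.
    """
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    token_loss = -np.take_along_axis(log_probs, labels[..., None], axis=-1).squeeze(-1)
    return token_loss.mean(axis=-1)  # (batch,)

logits = np.random.randn(4, 7, 32)
labels = np.random.randint(0, 32, size=(4, 7))
print(per_sample_ce(logits, labels).shape)  # (4,)
```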
transformers
25,302
closed
Fix typo: Roberta -> RoBERTa
# What does this PR do? Small typo in docs: "Roberta" should have the correct capitalization "RoBERTa". Fixes #25301 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). <!-- - [ ] Did you read the [contributor guideline](https://githu...
08-03-2023 20:04:27
08-03-2023 20:04:27
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,301
closed
Minor typo referencing RoBERTa
"Roberta" should use the correct capitalization: "RoBERTa" https://github.com/huggingface/transformers/blob/d27e4c18fe2970abcb9a48dcb8a824e48083b15f/docs/source/en/tokenizer_summary.md?plain=1#L144 Should be a simple fix.
08-03-2023 19:58:21
08-03-2023 19:58:21
transformers
25,300
open
Add zero-shot classification task for BLIP-2
### Feature request I would like to add the support for the zero-shot classification task using BLIP2, computing text-image similarities with the normalized embeddings, that would be accessed from BLIP2 feature extractor. The idea is to enable calling the zero-shot classification pipeline using BLIP2, by implement...
08-03-2023 19:53:46
08-03-2023 19:53:46
transformers
25,299
open
cannot import name 'Module' from '_pytest.doctest'
### System Info transformers 4.32.0.dev0 torch 2.1.0.dev20230523+cu117 Error: Traceback (most recent call last): File "/workspace/transformers/examples/pytorch/language-modeling/run_clm.py", line 52, in <module> Traceback (most recent call last): File "/workspace/tran...
08-03-2023 19:05:56
08-03-2023 19:05:56
You might need a `pip install --upgrade pytest`.
transformers
25,298
open
[Whisper] Better error message for outdated generation config
# What does this PR do? Gives a better error message in the case that a user tries using an outdated generation config with the new generation arguments `language` and `task` (as described in https://github.com/huggingface/transformers/issues/25084#issuecomment-1653722724).
08-03-2023 17:57:18
08-03-2023 17:57:18
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25298). All of your documentation changes will be reflected on that endpoint.
transformers
25,297
open
MaskFormer, Mask2Former - replace einsum for tracing
# What does this PR do? Maskformer cannot currently be traced because of einsum operations. This PR replaces the einsum operations with standard matmuls. With this PR, the following now runs: ```python import torch from transformers import Mask2FormerForUniversalSegmentation device = torch.device("cuda...
08-03-2023 17:48:58
08-03-2023 17:48:58
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25297). All of your documentation changes will be reflected on that endpoint.
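The einsum-to-matmul substitution described in the PR above can be illustrated generically (NumPy here for a runnable sketch; the PR itself edits the torch model code):

```python
import numpy as np

# The einsum "bqc,bkc->bqk" (query-key similarity) can be written as a plain
# batched matmul, which plays better with tracing and compilation.
q = np.random.randn(2, 5, 8)  # (batch, queries, channels)
k = np.random.randn(2, 9, 8)  # (batch, keys, channels)

via_einsum = np.einsum("bqc,bkc->bqk", q, k)
via_matmul = q @ k.transpose(0, 2, 1)

print(np.allclose(via_einsum, via_matmul))  # True
```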
transformers
25,296
open
BertForSequenceClassification does not support 'device_map':"auto" yet
### System Info I have trained a model and am now trying to load and quantise it but getting the error: BertForSequenceClassification does not support 'device_map':"auto" yet Code for loading is simply: ` model = AutoModelForSequenceClassification.from_pretrained(model_dir, device_map='auto', load_in_8bit=T...
08-03-2023 17:00:09
08-03-2023 17:00:09
transformers
25,295
closed
[small] llama2.md typo
# What does this PR do? `groupe` -> `grouped` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. ...
08-03-2023 16:51:06
08-03-2023 16:51:06
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,294
open
Generate: remove Marian hack
# What does this PR do? WIP, let's see first if all tests pass
08-03-2023 16:48:40
08-03-2023 16:48:40
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25294). All of your documentation changes will be reflected on that endpoint.
transformers
25,293
open
MassFormer
### Model description We propose adding a new model, MassFormer, to predict tandem mass spectra accurately. MassFormer uses a graph transformer architecture to model long-distance relationships between atoms in the molecule. The transformer module is initialized with parameters obtained through a chemical pre-training...
08-03-2023 16:41:42
08-03-2023 16:41:42
transformers
25,292
open
Generate: get generation mode as a string
# What does this PR do? Currently, generate gets several `is_XXX_mode` flags to determine the generation mode. This was cool when there were a handful of generation modes, but now it means we have many variables. This PR replaces that part of the logic with a single variable -- a string containing the name of the gen...
08-03-2023 16:33:36
08-03-2023 16:33:36
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25292). All of your documentation changes will be reflected on that endpoint.
transformers
25,291
open
Document check copies
# What does this PR do? This PR documents a little better how our `Copied from` framework works, adds comments in the actual scripts, and reworks the test a bit to be better. In passing I added a requested feature, which was to make sure `make fix-copies` took the function definition or the superclass into account...
08-03-2023 15:59:52
08-03-2023 15:59:52
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25291). All of your documentation changes will be reflected on that endpoint.
transformers
25,290
open
Allow `bark` to have a tiny model
# What does this PR do? Allow `bark` to have a tiny model. This is mainly for #24952 cc @ylacombe
08-03-2023 15:35:40
08-03-2023 15:35:40
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25290). All of your documentation changes will be reflected on that endpoint.
transformers
25,289
open
Quantized models + PEFT + multi-gpu setup failing during training
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.10.178-162.673.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.10.8 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 ### Who can help? @younesbelkada ### Information - [] The offici...
08-03-2023 15:17:46
08-03-2023 15:17:46
@younesbelkada maybe you can have a look at it?
transformers
25,288
closed
device_map="auto" -> uninitialized parameters
### System Info - `transformers` version: 4.31.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) ### Who can help? @Arthur...
08-03-2023 13:54:40
08-03-2023 13:54:40
I think this should have been fixed by #25101 Could you try again with a source install? (Yes it is a false positive, just tied weights where the copies are not present in the state dict.)<|||||>Awesome, that works. Was afraid that I was messing something up with converting to safetensors. Glad that that is not the ca...
transformers
25,287
open
Transformers Agent suggesting it should use text_generator although it is not provided.
### System Info I am running a version of [your notebook on Transformers Agent](https://colab.research.google.com/drive/1c7MHD-T1forUPGcC_jlwsIptOzpG3hSj), where I have added a cell where I ask the StarCoder agent to generate a sentence for me. I am using StarCoder, as you can see: ``` #@title Agent init agent...
08-03-2023 13:08:51
08-03-2023 13:08:51
I'm not too sure why you are reporting a bug. The agent is an LLM which sometimes hallucinates content (in this case, a tool that does not exist). If your prompt does not work, you should try refining it. You should also try using another model and see if it performs better.
transformers
25,286
closed
[JAX] Bump min version
# What does this PR do? Bumps the minimum version of JAX to [0.4.1](https://jax.readthedocs.io/en/latest/changelog.html#jax-0-4-1-dec-13-2022), the earliest version where the new `jax.Array` API is introduced, replacing the deprecated `jax.numpy.DeviceArray` API. This allows compatibility with the latest JAX version...
08-03-2023 12:53:27
08-03-2023 12:53:27
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,284
open
Fix Llama's attention map handling for left padding which causes numerical instability and performance drops
Hi this PR is trying to address the performance drop and potential numerical instability caused by vanilla left padding in Llama. Here is the explanation: 1. If we initialize the tokenizer with left padding and call model.generate without passing in corresponding attention_mask, the code will run, but for the instanc...
08-03-2023 12:02:01
08-03-2023 12:02:01
cc @ArthurZucker
transformers
25,283
open
Use of logging.warn is deprecated in favour of logging.warning
There are a few places where `transformers` uses the deprecated `warn` method on a logger, while most of the library uses `warning`. While this works for now, it will presumably be removed at some point (calling it emits a `DeprecationWarning`) and it means that strict test runners (such as `pytest`) complain about som...
08-03-2023 11:38:29
08-03-2023 11:38:29
@PeterJCLaw Indeed! Happy to review a PR :)
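The deprecation reported above is easy to demonstrate with the standard library alone: `Logger.warn` is a deprecated alias for `Logger.warning` and emits a `DeprecationWarning` when called.

```python
import logging
import warnings

logger = logging.getLogger("example")

# logger.warn is a deprecated alias for logger.warning; calling it emits a
# DeprecationWarning, which strict test runners such as pytest surface.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    logger.warn("old spelling")           # deprecated
    logger.warning("preferred spelling")  # no warning emitted

print(any(issubclass(w.category, DeprecationWarning) for w in caught))  # True
```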
transformers
25,282
open
Timm models Safetensor weights give 'NoneType' object has no attribute 'get', weight re-initialization and wrong num_labels
### System Info My env information: ``` - `transformers` version: 4.31.0 - Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.31 - Python version: 3.9.17 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.20.3 - Accelerate config: not found - PyTorch version (GPU?)...
08-03-2023 09:20:08
08-03-2023 09:20:08
@sawradip `timm` weights on the hub work in timm, unless I'm missing something (some automatic conversion was added that I'm not aware) I don't think there is any expectation you can load them in `transformers`? I feel the pytorch native weights is a bug that it doesn't crash and it's probably not loading any keys... ...
transformers
25,281
closed
Docs: Update list of `report_to` logging integrations in docstring
# What does this PR do? ## Pull Request overview * Add missing `dagshub`, `codecarbon` and `flyte` integrations to `TrainingArguments` docstring. * Update `report_to` type hint to allow strings. ## Details I also converted the ordering back to alphabetical. I considered using a typing `Literal` as the type...
08-03-2023 08:52:32
08-03-2023 08:52:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,280
open
How to download files from HF spaces
### System Info google colab ### Who can help? @sanchit-gandhi @rock ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproductio...
08-03-2023 07:02:03
08-03-2023 07:02:03
Hi @andysingal, There is a typo in the repo_id. The correct command is: ``` model_path = hf_hub_download(repo_id="xinyu1205/recognize_anything_model", filename="tag2text_swin_14m.pth", local_dir = "/content") ``` If you receive an error that a repo doesn't exist, the best thing to do is check directly on...
transformers
25,279
closed
CI 🚀 even more
# What does this PR do? A follow-up of #25274: - Currently `torch_job` reaches `95%` RAM; with this PR, it reaches only `82%`. - Also smaller RAM usage for `tf_job`: `60%` | `flax_job`: `86%` - Avoid the non-modeling files being tested redundantly - we save ~ 2 x 8 = 16 min of timing. Now, ...
08-03-2023 06:03:20
08-03-2023 06:03:20
Well, requested a review too quickly, sorry, but just a few tiny things to fix ...<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>OK, fair point. At least a (closed) PR is in the history for reference if we ever need it in the future. Thanks!<|||||>(we will need to keep an eye on...
transformers
25,278
open
Llama tokenizer add_prefix_space
Hi @sgugger This PR enables the llama tokenizer to support `add_prefix_space`. Would you please help me review it? Thanks!
08-03-2023 03:36:00
08-03-2023 03:36:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25278). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @sgugger , I have the same request here. My problem is as follows: "\nObservation" is a substring of "!\nObservation", but in the encoded ...
transformers
25,277
open
Unable to quantize Meta's new AudioCraft MusicGen model
### System Info - Windows 11 64bit - Python 3.10.12 - Torch v2.0.1+cu117 - Transformers v4.31.0 - audiocraft v0.0.2 - bitsandbytes v0.41.0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `exa...
08-03-2023 00:18:53
08-03-2023 00:18:53
I figured out a fix by adding the line ```python inputs_embeds = inputs_embeds.to(torch.float16) ``` right after line 776, but I noticed commit https://github.com/huggingface/transformers/commit/03f98f96836477f6f5b86957d3ce98778cad5d94 which also fixes this bug. So the second bug is fixed if you're using a version ...
transformers
25,276
open
vectorize PrefixConstrainedLogitsProcessor
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this w...
08-02-2023 20:56:57
08-02-2023 20:56:57
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25276). All of your documentation changes will be reflected on that endpoint.<|||||>There's a silly shape thing happening here which I'll try to debug ASAP (unless others are interested). Unfortunately testing locally is not worki...
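The vectorization idea in the PR above can be sketched in NumPy: rather than looping over batch rows to mask disallowed tokens, build one boolean mask and apply it in a single operation (function name and shapes are illustrative, not the processor's actual code):

```python
import numpy as np

def apply_prefix_constraints(scores, allowed_per_row):
    """Set logits outside each row's allowed-token set to -inf, vectorized.

    scores: (batch, vocab); allowed_per_row: list of allowed token-id lists,
    one per batch row.
    """
    mask = np.zeros_like(scores, dtype=bool)
    rows = np.repeat(np.arange(len(allowed_per_row)),
                     [len(a) for a in allowed_per_row])
    cols = np.concatenate([np.asarray(a) for a in allowed_per_row])
    mask[rows, cols] = True
    return np.where(mask, scores, -np.inf)

scores = np.zeros((2, 5))
out = apply_prefix_constraints(scores, [[0, 2], [4]])
print(int(np.isinf(out).sum()))  # 7 disallowed entries out of 10
```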
transformers
25,275
open
Replace jnp.DeviceArray with jax.Array in FLAX models
## What does this PR do? Recent JAX versions have dropped support for jax.numpy.DeviceArray. Many FLAX models refer to jax.numpy.DeviceArray which causes a crash. This PR replaces all references to jax.numpy.DeviceArray with jax.Array. <!-- Congratulations! You've made it this far! You're not quite done yet thou...
08-02-2023 20:03:56
08-02-2023 20:03:56
Thanks for the fix @akhilgoe - believe this is a duplicate of #24875?<|||||> > Thanks for the fix @akhilgoe - believe this is a duplicate of #24875? Yes correct! <|||||>If it's okay with you can we give @mariecwhite the opportunity to finish their PR since they've worked on it since last week? (should be merged...
transformers
25,274
closed
CI with `pytest_num_workers=8` for torch/tf jobs
We set `pytest_num_workers` to `3` for `torch_job` and `6` for `tf_job` to avoid OOM. With the recent efforts of reducing model size in CI, we can actually set `pytest_num_workers=8`. - The full suite: all 3 jobs (PT/TF/Flax): `12-15 minutes` - On the latest nightly CI (without all PRs merged today): `PT: 37 min | TF...
08-02-2023 19:21:30
08-02-2023 19:21:30
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,273
closed
use `pytest_num_workers=8` for `torch_job` and `tf_job`
# What does this PR do? We set `pytest_num_workers` to `3` for `torch_job` and `6` for `tf_job` to avoid OOM. With the recent efforts of reducing model size in CI, we can actually set `pytest_num_workers=8`. The full suite: all 3 jobs (PT/TF/Flax) 12-15 minutes (on the latest nightly CI without all PRs merged to...
08-02-2023 19:17:59
08-02-2023 19:17:59
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25273). All of your documentation changes will be reflected on that endpoint.
transformers
25,272
closed
Question about generate method for AutoModelForCausalLM
Hi, I am trying to use the git model from the pretrained to pass to captum API for calculation of the attribution score. ` ### Initialize the attribution algorithm from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("microsoft/git-base") ig = IntegratedGradients(model...
08-02-2023 17:08:26
08-02-2023 17:08:26
Hi, thanks for raising an issue! This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.
transformers
25,271
open
EncoderDecoder does not automatically create decoder_attention_mask to match decoder_input_ids
### System Info ``` - `transformers` version: 4.31.0 - Platform: Linux-4.15.0-192-generic-x86_64-with-glibc2.27 - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - ...
08-02-2023 14:59:12
08-02-2023 14:59:12
Somewhat related: it seems like in the notebook, neither the `decoder_input_ids` nor the `labels` are shifted; Patrick claims it's because: > `"labels"` are shifted automatically to the left for language modeling training. but I don't see any evidence of this in the implementation. Was this behavior changed at some point? ...
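The shifting behaviour under discussion can be illustrated with a small, self-contained sketch: build `decoder_input_ids` by shifting the labels one position to the right and derive a matching `decoder_attention_mask` from the pad token. The helper and token ids here are hypothetical, not the EncoderDecoderModel implementation:

```python
def shift_right(labels, decoder_start_token_id, pad_token_id):
    """Build decoder_input_ids from labels (teacher forcing) plus a matching mask.

    Illustrative only: ids and behaviour are assumptions for this sketch.
    """
    # Prepend the decoder start token and drop the last label position.
    input_ids = [[decoder_start_token_id] + row[:-1] for row in labels]
    # -100 positions in labels are ignored by the loss; map them to pad in inputs.
    input_ids = [[pad_token_id if t == -100 else t for t in row] for row in input_ids]
    attention_mask = [[0 if t == pad_token_id else 1 for t in row] for row in input_ids]
    return input_ids, attention_mask

ids, mask = shift_right([[5, 6, 7, -100]], decoder_start_token_id=0, pad_token_id=1)
print(ids)   # [[0, 5, 6, 7]]
print(mask)  # [[1, 1, 1, 1]]
```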
transformers
25,270
open
Device errors when loading in 8 bit
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.31.0 - Platform: Linux-5.10.178-162.673.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.10.10 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate ver...
08-02-2023 13:39:56
08-02-2023 13:39:56
You cannot re-dispatch a model that was loaded in 8bit. You need to pass along your `max_memory` or `device_map` to the call to `from_pretrained`.
transformers
25,269
open
run_clm_no_trainer.py example - problem with most recent checkpoint loading
The example has code for finding the latest checkpoint, but accelerator.load_state isn't called. https://github.com/huggingface/transformers/blob/1baeed5bdf3c58b723a6125632567f97bdf322c6/examples/pytorch/language-modeling/run_clm_no_trainer.py#L561C15-L561C15
08-02-2023 13:39:33
08-02-2023 13:39:33
Hi @TomerRonen34, thanks for raising this issue! Can you make sure to follow the issue template and include: * A reproducible code snippet * Details of the expected and observed behaviour including the full traceback if it exists * Information about the running environment: run `transformers-cli env` in the ter...
transformers
25,268
closed
recommend DeepSpeed's Argument Parsing documentation
# What does this PR do? Clarify how to properly set the arguments passed by `deepspeed` when running in CLI. For example the following errors might be raised when running something like `deepspeed --num_gpus=2 fine-tune.py google/flan-t5-xxl` due to args passed by `deepspeed`: ``` usage: fine-tune.py [-h] mod...
08-02-2023 13:32:15
08-02-2023 13:32:15
cc @pacman100 <|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
25,267
closed
[MMS] Fix mms
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this w...
08-02-2023 13:26:07
08-02-2023 13:26:07
_The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh ok to merge or should we run some more tests?<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25267). All of your documentation changes will be reflected on that endpoint.
transformers
25,266
closed
CI with layers=2
# What does this PR do? Running a (sub) set of 24315 tests (given by test fetcher) - only tests in `test_modeling_xxx.py`. (for a full run like the nightly run, it doesn't seem to change anything about running time - needs more investigation) Running time: - num_layers = mixed (2, 3, 4, 5, 6) - currently `main` - ...
08-02-2023 13:08:37
08-02-2023 13:08:37
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,265
open
[`Docs` / `BetterTransformer` ] Added more details about flash attention + SDPA
# What does this PR do? As discussed offline with @LysandreJik, this PR clarifies to users how it is possible to use Flash Attention as a backend for the most-used models in transformers. As we have seen some questions from users asking whether it is possible to integrate flash attention into HF models, whereas you...
08-02-2023 12:59:23
08-02-2023 12:59:23
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25265). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks a lot for the extensive review @stevhliu ! 🎉
transformers
25,264
open
[Question] How to load AutoFeatureExtractor on GPU?
Hi, I am following this guide to learn how to do audio classification with wav2vec2: https://huggingface.co/docs/transformers/main/tasks/audio_classification I intend to extract features of my data with the following codes ``` feature_extractor = AutoFeatureExtractor.from_pretrained("/workspace/models/wav2vec2-lar...
08-02-2023 12:26:20
08-02-2023 12:26:20
Hi @treya-lin, thanks for raising an issue! This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports. You can move arrays prepared by the feature extractor to the GPU using the `to` method on its outputs: ``` de...
transformers
25,263
closed
Remove `pytest_options={"rA": None}` in CI
# What does this PR do? This option causes the (TF/Flax) jobs to spend 6-8 minutes (for a full set run) to prepare something for reporting after the actual tests are finished. Taking [this TF job (nightly run)](https://app.circleci.com/pipelines/github/huggingface/transformers/69562/workflows/8fd9db08-9730-4d57-9...
08-02-2023 11:36:03
08-02-2023 11:36:03
_The documentation is not available anymore as the PR was closed or merged._<|||||> > For reference, I think `-rA` generates a [detailed summary report for all groups](https://docs.pytest.org/en/6.2.x/usage.html#detailed-summary-report). Oh yes, my memory mixed the `--make-reports` and `-rA` things. Thanks! <|||||...
transformers
25,262
open
model.push_to_hub not working for gtr-large while loading with 8-bit using bnb
### System Info Issue :- I want to load gtr-large model in 8-bits using bitsandbytes and save it for future usage model = T5ForConditionalGeneration.from_pretrained('sentence-transformers/gtr-t5-large',load_in_8bit=True) model.push_to_hub("snigdhachandan/gtr_large_8bit") Error :- Traceback (most recen...
08-02-2023 11:18:38
08-02-2023 11:18:38
Hi @nss-programmer, thanks for raising this issue. There's been quite a few updates between bitsandbytes and transformers recently. Could you update your local transformers version to the most recent release `pip install --upgrade transformers` and try again? If that doesn't work, then could you try from source `pi...
transformers
25,261
open
Mask2Former broadcasting issue when running inference on model traced with GPU device
### System Info ``` - System information: x86_64 GNU/Linux - Ubuntu version: 18.04 - Python version: 3.8.12 - CUDA version: 11.1 - PyTorch version: 2.0.1 - transformers version: 4.31.0 ``` ### Who can help? @amyeroberts @sgugger @muellerzr ### Information - [ ] The official example scripts - [ ] My own...
08-02-2023 11:06:50
08-02-2023 11:06:50
Hi @matteot11, thanks for reporting this and for providing such a detailed and clean issue report ❤️ Looking into it 🔍 <|||||>@matteot11 I'm going to open up a PR soon to resolve this and remove the einsum operations. In the meantime, if you need to be able to run a compiled model now, it will run on torch nightly...
transformers
25,260
closed
⚠️ [Wav2Vec2-MMS] `pipeline` and `from_pretrained` fail to load the Wav2Vec2 MMS checkpoints
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (False) - Tensor...
08-02-2023 10:22:16
08-02-2023 10:22:16
cc @patrickvonplaten <|||||>It looks like it's related to some recent changes and accelerate. If you checkout this commit: https://github.com/huggingface/transformers/commit/b0513b013b10939a2b47ab94933c2cca909716a2 and uninstall accelerate the code snippet works fine for me.<|||||>IIRC, fast loading with acceler...
transformers
25,259
closed
Update rescale tests - cast to float after rescaling to reflect #25229
# What does this PR do? In #25229 - the casting to float was moved back to after rescaling. This wasn't reflected in the specific rescaling tests for EfficientNet and ViVit, resulting in failing tests. This PR resolves this. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dis...
08-02-2023 10:01:18
08-02-2023 10:01:18
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,258
open
Why I cannot assign new parameter to the whisper pretrained config?
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) -...
08-02-2023 09:29:35
08-02-2023 09:29:35
Hi @teinhonglo, thanks for raising this issue! The reason for not being able to assign through the `from_pretrained` call is a safety check. Unknown kwargs are not applied: their application is ambiguous - should they control the `from_pretrained` behaviour or be set as a config attribute? You can see which kwargs ...
transformers
25,257
open
how to print out the data loaded by each epoch during trainer.train() training?
### Feature request Please tell me: how can I print out the data loaded in each epoch during trainer.train() training? ### Motivation How to print out the data loaded in each epoch during trainer.train() training? ### Your contribution How to print out the data loaded in each epoch during trainer.train() train...
08-02-2023 09:13:55
08-02-2023 09:13:55
Hi @ahong007007, thanks for raising an issue! This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.
transformers
25,256
open
Use 'transformers.BertModel.from_pretrained', The code is blocked
![52ae2d1edf2fa3044e6932d42c558f1](https://github.com/huggingface/transformers/assets/86940083/180c1033-375a-46b8-af7e-cda344e1e5ff) this is py-spy result: ![image](https://github.com/huggingface/transformers/assets/86940083/5d5aa094-fa16-452d-ab39-8700fa4d8d1e)
08-02-2023 08:56:36
08-02-2023 08:56:36
Hi, are you running the script/command in some particular setting? It looks like it's in a multiprocessing setting. Could you provide a self-contained code snippet instead of just uploading screenshots? Thanks in advance.<|||||>It works without pyrocketmq, but with pyrocketmq it does not. The code is: ``` import jpype.impo...
transformers
25,255
open
fix bad URL to Llama 2
# What does this PR do? ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
08-02-2023 08:43:23
08-02-2023 08:43:23
@fangli80 Running`make fix-copies` and pushing the changes will resolve the failing quality CI checks
transformers
25,254
open
Add FlaxCLIPTextModelWithProjection
# What does this PR do? `FlaxCLIPTextModelWithProjection` is necessary to support the Flax port of Stable Diffusion XL: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/fb6d705fb518524cabc79c77f13a0e7921bcab3a/text_encoder_2/config.json#L3 I can add some tests, if necessary, after this appr...
08-02-2023 08:25:27
08-02-2023 08:25:27
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25254). All of your documentation changes will be reflected on that endpoint.<|||||>Should we maybe for now just add it in a subfolder of sdxl in diffusers here: https://github.com/huggingface/diffusers/tree/main/src/diffusers/pip...
transformers
25,253
open
RWKV-WORLD-4
### Model description BlinkDL/rwkv-4-world is a repo on Hugging Face. I want the model and its tokenizer to be added to the Transformers library. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No r...
08-02-2023 07:39:58
08-02-2023 07:39:58
Hi @CosmoLM, thanks for opening this model request! The RWKV-4 model already exists in transformers -- [PR](https://github.com/huggingface/transformers/pull/22797), [docs](https://huggingface.co/docs/transformers/v4.31.0/en/model_doc/rwkv#rwkv-attention-and-the-recurrent-formulas). To enable loading the model throu...
transformers
25,252
open
run_mae.py can not be used directly on own dir
### System Info ref: https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining python run_mae.py \ --model_type vit_mae \ --dataset_name nateraw/image-folder \ --train_dir <path-to-train-root> \ --output_dir ./outputs/ \ --remove_unused_columns False \ --...
08-02-2023 07:30:25
08-02-2023 07:30:25
The error > FileNotFoundError: Unable to find '/home/ana/data4/datasets/rvl_cdip/data/pretrain_images/train/' at / shows you don't have local datasets (or there is some issue to locate it). Could you verify this on your own side? Thanks.<|||||>Hi @CheungZeeCn, thanks for raising this issue! So that we can bes...
transformers
25,251
open
Defining top_k within pipeline changes output from list to nested list
### System Info ``` - `transformers` version: 4.30.2 - Platform: Linux-5.14.0-162.22.2.el9_1.x86_64-x86_64-with-glibc2.34 - Python version: 3.9.17 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 1.11.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Fla...
08-02-2023 05:12:29
08-02-2023 05:12:29
Hi @Harjas123 thank you for reporting! Our team will take a look.<|||||>also cc @Narsil <|||||>I agree that this is inconsistent but I don't think there is much to do about it now since this has been the case for the past three years, and making any change would break a lot of users code.<|||||>I understand. Would it a...
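Until the inconsistency is resolved, callers can normalize the two shapes themselves. A minimal sketch of such a workaround (the helper name and the exact output shapes are assumptions based on the report, not a documented pipeline contract):

```python
def normalize_pipeline_output(result):
    """Flatten one level if the pipeline returned a nested list.

    When `top_k` is set, the pipeline reportedly returns a list of
    lists of dicts (one inner list per input); without it, a flat
    list of dicts. This helper makes both shapes look the same for
    a single input.
    """
    if result and isinstance(result[0], list):
        return result[0]  # single input: unwrap the outer list
    return result

flat = normalize_pipeline_output([{"label": "POS", "score": 0.9}])
nested = normalize_pipeline_output([[{"label": "POS", "score": 0.9}]])
```

Both calls above yield the same flat list, so downstream code does not need to know whether `top_k` was passed.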
transformers
25,250
open
Ko perf train gpu one
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean"! --> # What does this PR do? Translated the `<your_file>.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [ ] Chec...
08-02-2023 03:43:28
08-02-2023 03:43:28
transformers
25,249
closed
Bump cryptography from 41.0.2 to 41.0.3 in /examples/research_projects/decision_transformer
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.2 to 41.0.3. <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's changelog</a>.</em></p> <blockquote> <p>41.0.3 - 2023-08-01</p> <pre><code> * Fixed performan...
08-02-2023 02:22:03
08-02-2023 02:22:03
_The documentation is not available anymore as the PR was closed or merged._<|||||>OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major vers...
transformers
25,248
open
Allow `trust_remote_code` in example scripts
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this w...
08-01-2023 20:31:51
08-01-2023 20:31:51
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25248). All of your documentation changes will be reflected on that endpoint.<|||||>Will do flax and tf tomorrow. I have a few questions though: 1. @ydshieh, this script is still using `use_auth_token`. Is this intended? https:...
transformers
25,247
open
Enable use of best epoch in Trial, with early stopping, during hyperparameter search
### Feature request When running a `Trainer.hyperparameter_search`, each trial's value is calculated from the last epoch's chosen metric. However, especially when using early stopping and `load_best_model_at_end`, it would be useful to use the best model instead. This could be a parameter of `Trainer.hyperparameter...
08-01-2023 19:36:07
08-01-2023 19:36:07
cc @sgugger <|||||>Yes this is not currently supported. Could be nice to add, but this is not high-priority on our side, so it would have to be a contribution :-) Happy to review a PR!
transformers
25,246
closed
Fix return_dict_in_generate bug in InstructBlip generate function
# What does this PR do? Previously, the postprocessing conducted on generated sequences in InstructBlip's generate function assumed these sequences were tensors (i.e. that `return_dict_in_generate == False`). This PR updates the InstructBlip generate function to check whether the result of the call to the wrapped...
08-01-2023 18:28:04
08-01-2023 18:28:04
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,245
open
BLIP-2 request: If it's even possible, can you please provide an official example script of how to get the text(caption) features and image features into the same vector space (e.g. for cross-modal retrieval/search using BLIP-2 models, similar to what we can already do with CLIP.) Thanks in advance.
### System Info linux, python 3.8+, pytorch '1.13.0+cu116' ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ...
08-01-2023 18:21:07
08-01-2023 18:21:07
Hi @wingz1, thanks for raising an issue! This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports. There are code examples of how to use [BLIP](https://huggingface.co/docs/transformers/v4.31.0/en/model_doc/blip#trans...
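Whatever projection is used to obtain comparable text and image embeddings, the retrieval step itself reduces to a cosine similarity between two vectors. A framework-free sketch of that step (the vectors below are dummies standing in for pooled text/image features, not outputs of any BLIP-2 method):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

text_feat = [0.1, 0.3, 0.5]   # stand-in for a pooled text embedding
image_feat = [0.1, 0.3, 0.5]  # stand-in for a pooled image embedding
score = cosine_similarity(text_feat, image_feat)
```

For retrieval, one would compute this score between a query embedding and each candidate embedding and rank by it; getting both modalities into a shared space in the first place is the part the issue asks about.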
transformers
25,244
open
VQA task guide
This PR adds a new Visual Question Answering task guide to the transformers docs: fine-tuning ViLT, based on @NielsRogge 's [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/ViLT/Fine_tuning_ViLT_for_VQA.ipynb)
08-01-2023 17:57:58
08-01-2023 17:57:58
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25244). All of your documentation changes will be reflected on that endpoint.
transformers
25,243
closed
RetNet model support
### Model description RetNet / Retentive Networks is a new model *archetype* released by microsoft; the research paper is [here](https://arxiv.org/pdf/2307.08621.pdf). As of now, there is *one* model for retnet; [made by me](https://huggingface.co/parsee-mizuhashi/retnet-tiny-wikitext-undertrained); which is undertrai...
08-01-2023 17:35:07
08-01-2023 17:35:07
cc @ArthurZucker @younesbelkada <|||||>p.s. if Google offered any bigger TPUs for TRC, I could train retnet-3b (the point at which RetNet is better than regular transformers), but as of now there's retnet_base (small) and retnet_medium (I'll upload it when it gets good)<|||||>I am wondering if the original authors rele...
transformers
25,242
open
WIP In assisted decoding, pass model_kwargs to model's forward call (fix prepare_input_for_generation in all models)
# What does this PR do? Previously, assisted decoding would ignore any additional kwargs that it doesn't explicitly handle. This was inconsistent with other generation methods, which pass the model_kwargs through prepare_inputs_for_generation and forward the returned dict to the model's forward call. The prepare_...
08-01-2023 16:05:14
08-01-2023 16:05:14
@sinking-point the PR has "WIP" in the title -- is it still under development, or is it ready to review?<|||||>Not ready yet. Still have to fix more models and see what's breaking the other test. I've deprioritised this somewhat as it's quite time consuming, but I'll keep chipping away at it whenever I can. If you nee...
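The pattern the PR describes — build the forward arguments through a single preparation hook instead of silently dropping unknown kwargs — can be sketched independently of any model (all names here are illustrative stand-ins, not transformers APIs):

```python
def prepare_inputs_for_generation(input_ids, **model_kwargs):
    """Collect everything the forward pass needs into one dict.

    Unknown kwargs are carried through instead of being silently
    dropped, which is the behavior the PR aims for.
    """
    return {"input_ids": input_ids, **model_kwargs}

def forward(**inputs):
    # A stand-in forward that just reports which kwargs it received.
    return sorted(inputs)

model_inputs = prepare_inputs_for_generation([1, 2], attention_mask=[1, 1])
received = forward(**model_inputs)
```

Any extra kwarg (`attention_mask` here) survives the round trip into the forward call, instead of being filtered out by a hand-picked allow-list.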
transformers
25,241
open
Bug in `PreTrainedModel.resize_token_embeddings` When Using DeepSpeed Zero Stage 3
### System Info transformers version: 4.31.0 Platform: Linux 5.4.238-148.346.amzn2.x86_64 Python version: 3.8.10 Huggingface_hub version: 0.14.1 Safetensors version: 0.3.1 PyTorch version (GPU?): 2.0.1+cu117 (True) Tensorflow version (GPU?): not installed (NA) Flax version (CPU?/GPU?/TPU?): not installed (NA) ...
08-01-2023 16:04:40
08-01-2023 16:04:40
Hi! Would it be possible for you to do `resize_token_embeddings` without DeepSpeed, save the model, and load the new model in the script where you use DeepSpeed? This might be easier and quicker in terms of solution/workaround (if it works).<|||||>Hi, thanks for the suggestion. I have RCed this and have a nonhacky solu...
transformers
25,240
open
Docs: introduction to the generate API
# What does this PR do? This PR adds a sort of landing page on `generate`, which was missing in our docs. This page is useful for beginners and experienced users alike -- it goes through the basic generate API for both LLMs and non-text tasks, common caveats, and ends with pointers for advanced exploration. I ex...
08-01-2023 15:59:03
08-01-2023 15:59:03
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25240). All of your documentation changes will be reflected on that endpoint.<|||||>Do we really want to include non-text parts so prominently here? I think 99% of the users clicking on "Generation" expect to see only text generat...
transformers
25,239
closed
Fix set of model parallel in the Trainer when no GPUs are available
# What does this PR do? Fixes how `self.is_model_parallel` is set in the Trainer when no GPUs are available. Fixes #25236
08-01-2023 14:56:35
08-01-2023 14:56:35
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,238
open
TF-OPT attention mask fixes
With apologies for the delay, this PR should hopefully resolve the issues in #24637. @abb128 can you please try installing from this PR and verify if it resolves your issues? You can install from this PR with: `pip install --upgrade git+https://github.com/huggingface/transformers.git@tf_opt_fixes` Fixes #24637
08-01-2023 14:50:27
08-01-2023 14:50:27
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25238). All of your documentation changes will be reflected on that endpoint.
transformers
25,237
open
Deal with nested configs better in base class
# What does this PR do? This PR removes the need to override `to_dict` in model configs by implementing the whole logic in the base class. It also deals better with `to_diff_dict` for those configs, by analyzing the dict of sub-configs key by key and not as a whole. This also removes the `is_composition` flag from c...
08-01-2023 14:42:20
08-01-2023 14:42:20
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25237). All of your documentation changes will be reflected on that endpoint.<|||||>@ArthurZucker the `is_composition=True` is not necessary anymore except for configs which have no default for their subconfigs. And it should only...
transformers
25,236
closed
Fails to create Trainer object. IndexError: list index out of range at --> torch.device(devices[0]);
### System Info The system is google colab, transformers related packages are installed from git. ``` - `transformers` version: 4.32.0.dev0 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.22.0.dev...
08-01-2023 14:37:03
08-01-2023 14:37:03
Same issue as: https://discuss.huggingface.co/t/indexerror-on-devices-0-when-initializing-a-trainer/46410<|||||>I can fix that particular issue but you won't be able to actually train a model with CPU/disk offload, only do evaluation.<|||||>I figured out in my case removing `os.environ["CUDA_VISIBLE_DEVICES"]="0"` se...
transformers
25,235
closed
Docs: separate generate section
# What does this PR do? A conclusion of the latest doc brainstorming section with @patrickvonplaten was that generate-related doc discoverability will become harder as we add more guides. The plan would envision a tutorial page and a few new developer guides -- in addition to the existing task pages, developer guide...
08-01-2023 14:35:54
08-01-2023 14:35:54
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,234
closed
Update bark doc
# What does this PR do? Bark can be greatly optimized with a few lines of code, which is discussed and explained in more detail in this [blog post](https://github.com/huggingface/blog/pull/1353). To encourage adoption and promote the use of optimization, I've added a few lines to the Bark documentation to reflect th...
08-01-2023 12:53:50
08-01-2023 12:53:50
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @MKhalusova and @sanchit-gandhi , I've updated the docs according to your comments! Thanks for the review!<|||||>Thanks @ylacombe for the recent round of changes!
transformers
25,233
closed
add generate method to SpeechT5ForTextToSpeech
# What does this PR do? This simple PR aims at adding a `generate` method to `SpeechT5ForTextToSpeech`, which does exactly the same as `generate_speech`. `generate_speech` was left for backward compatibility. The goal is to make `SpeechT5ForTextToSpeech` compatible with the [incoming TTS pipeline](https://g...
08-01-2023 11:39:29
08-01-2023 11:39:29
cc @gante as well<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @sanchit-gandhi and @sgugger , thanks for the review! I would like to add `SpeechT5ForTextToSpeechWithHiFiGAN` in another PR if that's ok with you, since it requires additional tests, and since the changes ...
transformers
25,232
open
AddedToken problems in LlamaTokenizer
### System Info - `transformers` version: 4.31.0 - Platform: macOS-13.5-x86_64-i386-64bit - Python version: 3.9.5 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): no...
08-01-2023 11:06:29
08-01-2023 11:06:29
This is part of the `stripping` issue mentioned on the PR. As you can see the following works as expected: ```python >>> dd = {"additional_special_tokens": [AddedToken("<bot>", rstrip = False)]} >>> tokenizer2.add_special_tokens(dd) >>> t1 = tokenizer1.tokenize(txt) >>> t2 = tokenizer2.tokenize(txt) >>> pri...
transformers
25,231
open
Seq2SeqTrainer.evaluate and predict don't yield the right number of predictions when num_return_sequences > 1
### System Info transformers: 4.31.0 accelerate: 0.21.0 python: 2.11.3 env: macOS 13.4.1 ### Who can help? @gante, I think, because this is related with generation ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in th...
08-01-2023 10:11:11
08-01-2023 10:11:11
It looks more like something in `accelerate`, so cc @muellerzr . But @antonioalegria > . It drops num_return_sequences - 1 sequences in the last batch Could you explain a bit more about this number? It doesn't seem to correspond to what you showed in the code snippet ..?<|||||>Apologies for not being clear. ...
transformers
25,230
closed
[`Detr`] Fix detr BatchNorm replacement issue
# What does this PR do? Fixes the current failing CI on #25077 / related failing jobs: https://app.circleci.com/pipelines/github/huggingface/transformers/69452/workflows/999f3686-2d9a-4324-bed6-1c858f4d8246/jobs/871127 In #25077 I decided to [add a property method `current_adapter`](https://github.com/younesbelk...
08-01-2023 09:50:13
08-01-2023 09:50:13
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25230). All of your documentation changes will be reflected on that endpoint.
transformers
25,229
closed
Move rescale dtype recasting to match torchvision ToTensor
# What does this PR do? The dtype casting of the input image when rescaling was moved in #25174 so that precision was kept when rescaling if desired. However, this broke equivalence tests with torchvision's `ToTensor` transform c.f. [this comment](https://github.com/huggingface/transformers/pull/24796#issuecomment-1...
08-01-2023 09:35:31
08-01-2023 09:35:31
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you very much, Amy!
transformers
25,228
closed
chatglm2 load_in_8bit=true can't reduce gpu memory when using transformer==4.31.0
### System Info - `transformers` version: 4.31.0 - Platform: Linux-3.10.0-1160.92.1.el7.x86_64-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.0+cu117 (True)...
08-01-2023 09:33:57
08-01-2023 09:33:57
ref:https://github.com/THUDM/ChatGLM2-6B/issues/163<|||||>cc @younesbelkada <|||||>+1<|||||>Thanks, my feeling is that it is related to the issue described in https://github.com/huggingface/transformers/pull/25105 Can you try that version of transformers meanwhile and let me know if this fixes your issue? ```bas...
transformers
25,227
closed
resolving zero3 init when using accelerate config with Trainer
# What does this PR do? 1. Fixes https://github.com/huggingface/accelerate/issues/1801
08-01-2023 08:55:52
08-01-2023 08:55:52
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,226
open
Add offline mode for agents
# What does this PR do? This PR adds a check in the remote tools setup to bypass it when Transformers is in offline mode. Fixes #25223
08-01-2023 08:46:37
08-01-2023 08:46:37
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25226). All of your documentation changes will be reflected on that endpoint.<|||||>I'm getting an error: ``` ValueError: image-transformation is not implemented on the Hub. ``` It's coming from ```_setup_default_tools``` ...
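The offline mode the PR hooks into is driven by documented environment switches; setting them before importing transformers keeps components, including agents once this PR lands, from hitting the network. A minimal sketch:

```python
import os

# TRANSFORMERS_OFFLINE and HF_HUB_OFFLINE are documented environment
# switches; they must be set before transformers / huggingface_hub
# are imported to take effect for the whole process.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

offline = os.environ["TRANSFORMERS_OFFLINE"] == "1"
hub_offline = os.environ["HF_HUB_OFFLINE"] == "1"
```

In practice one would export these variables in the shell before launching the script rather than mutating `os.environ` at the top of it; the effect is the same as long as it happens before the imports.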
transformers
25,225
closed
[Bis] Adding new tokens while preserving tokenization of adjacent tokens
### System Info * `transformers` version: 4.31 * Platform: Linux [...] 5.19.0-50-generic 50-Ubuntu x86_64 GNU/Linux * Python version: 3.10.12 * Huggingface_hub version: 0.16.4 * PyTorch version (GPU?): 2.0.1+cu118 (True) * Using GPU in script?: No * Using distributed or parallel set-up in script?: No ### W...
08-01-2023 08:29:56
08-01-2023 08:29:56
Hey! This has already been answered, and is a duplicate of #14770. Will be fixed by #23909.
transformers
25,224
open
🚨🚨🚨 [`SPM`] Finish fix spm models 🚨🚨🚨
# What does this PR do? Modifies `Llama` and `T5` other sentencepiece based tokenizer will follow. Previous behaviour is always possible with ` tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b", legacy = True)` ## The goal of `transformers`'s wrapping around `sentencepiece` To clarify, we want to: ...
08-01-2023 07:29:22
08-01-2023 07:29:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25224). All of your documentation changes will be reflected on that endpoint.<|||||>Will fix the prefixing of special tokens!
transformers
25,223
open
Agent trying to load remote tools when being offline
### System Info Transformers 4.31 Python 3.11.4 Windows 10 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or datase...
08-01-2023 07:26:02
08-01-2023 07:26:02
Hi @Romainlg29 Could you provide a complete code snippet instead of definitions like `model = ...`. Thanks in advance!<|||||>> Hi @Romainlg29 > > Could you provide a complete code snippet instead of definitions like `model = ...`. Thanks in advance! Hi, It's the following. ``` import os os.environ['TR...
transformers
25,222
closed
config.json file not available
### System Info colab notebook: https://colab.research.google.com/drive/118RTcKAQFIICDsgTcabIF-_XKmOgM-cc?usp=sharing ### Who can help? @ArthurZucker @youn ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (...
08-01-2023 07:10:05
08-01-2023 07:10:05
The error on the shared colab is ```python OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and p...
transformers
25,221
closed
[BUG REPORT] inconsistent inference results between batch of samples and a single sample in BLIP / BLIP2
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.15.0-1041-azure-x86_64-with-glibc2.31 - Python version: 3.9.17 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TP...
08-01-2023 05:04:24
08-01-2023 05:04:24
cc @younesbelkada , but @xk-huang Could you first try all the suggestions in [Reproducibility](https://pytorch.org/docs/stable/notes/randomness.html) 🙏 Thanks a lot. Also ``` # `False` is already the default torch.backends.cuda.matmul.allow_tf32 = False # The flag below controls whether to allow TF32 on cu...
transformers
25,220
open
OASST model is unavailable for Transformer Agent: `'inputs' must have less than 1024 tokens.`
### System Info - transformers version: 4.29.0 - huggingface_hub version: 0.16.4 - python version: 3.10.6 - OS: Ubuntu 22.04.2 LTS * run on Google Colab using [the provided notebook](https://colab.research.google.com/drive/1c7MHD-T1forUPGcC_jlwsIptOzpG3hSj?usp=sharing). * [my notebook](https://colab.research.goo...
08-01-2023 02:53:41
08-01-2023 02:53:41
Hi there. We temporarily increased the max length for this endpoint when releasing the Agents framework, but it's now back to its normal value. So yes, this one won't work anymore.<|||||>Thank you for the info, @sgugger! > So yes, this one won't work anymore. Then other OpenAssistant models may also only work with...
transformers
25,219
open
Trainer.model.push_to_hub() should allow private repository flag
### Feature request Trainer.model.push_to_hub() should allow a push to a private repository, as opposed to just pushing to a public and having to private it after. ### Motivation I get frustrated having to private my repositories instead of being able to upload models by default to a private repo programmatically. ...
07-31-2023 22:35:36
07-31-2023 22:35:36
Hi @arikanev, thanks for raising this issue. In `TrainingArguments` you can set [hub_private_repo to `True`](https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.hub_private_repo) to control this. <|||||>Thanks for the heads up! Time saver :) <|||||>Please note, I ...
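The flag lives in `TrainingArguments` rather than on `push_to_hub` itself. A sketch of the relevant arguments as a plain dict (`push_to_hub`, `hub_private_repo`, and `output_dir` are real `TrainingArguments` fields, shown here without instantiating the class so the snippet stays dependency-free):

```python
# Arguments that control Hub pushes during training. With
# hub_private_repo=True the repo created by the Trainer's push
# is private from the start, so no manual privatizing is needed.
training_args = {
    "output_dir": "my-model",   # hypothetical local directory name
    "push_to_hub": True,
    "hub_private_repo": True,   # create/push to a private repo
}
```

In a real script these keys would be passed as keyword arguments to `TrainingArguments(...)` before building the `Trainer`.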
transformers
25,218
closed
inject automatic end of utterance tokens
This adds a new feature: For select models add `<end_of_utterance>` token at the end of each utterance. The user can now easily break up their prompt and not need to worry about messing with tokens. So for this prompt: ``` [ "User:", image, "Describe this image.", "A...
07-31-2023 22:13:10
07-31-2023 22:13:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25218). All of your documentation changes will be reflected on that endpoint.
transformers
25,217
open
Scoring translations is unacceptably slow
### System Info - `transformers` version: 4.29.0 - Platform: Linux-3.10.0-862.11.6.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.16 - Huggingface_hub version: 0.12.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax versio...
07-31-2023 18:34:34
07-31-2023 18:34:34
cc @gante <|||||>Hey @erip 👋 Sadly, I'm out of bandwidth to dive into the performance of very specific generation modes (in this case, beam search with `PrefixConstrainedLogitsProcessor`). If you'd like to explore the issue and pinpoint the cause of the performance issue, I may be able to help, depending on the co...
transformers
25,216
closed
[`Docs`/`quantization`] Clearer explanation on how things works under the hood. + remove outdated info
# What does this PR do? As discussed internally with @amyeroberts , this PR makes things clearer to users on how things work under the hood for quantized models. Before this PR it was not clear to users how the other modules (non `torch.nn.Linear`) were treated under the hood when quantizing a model. cc @amyerob...
07-31-2023 17:50:07
07-31-2023 17:50:07
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,215
open
config.json file not available
### System Info colab notebook: https://colab.research.google.com/drive/118RTcKAQFIICDsgTcabIF-_XKmOgM-cc?usp=sharing ### Who can help? @sgugger @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder...
07-31-2023 17:34:46
07-31-2023 17:34:46
Hi @andysingal it seems you are trying to load an adapter model. You can load it with ```python from peft import AutoPeftModelForCausalLM model = AutoPeftModelForCausalLM.from_pretrained("Andyrasika/qlora-2-7b-andy") ``` If you want to load the base model in 4bit: ```python from peft import AutoPeftMod...
transformers
25,214
closed
Fix docker image build failure
# What does this PR do? We again get a not-enough-disk-space error on docker image build CI. I should try to learn some ways to reduce the size and avoid this error, but this PR fixes this situation in a quick way: install torch/tensorflow before running `pip install .[dev]`, so they are only installed once, and we have ...
07-31-2023 16:09:53
07-31-2023 16:09:53
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,213
closed
Update tiny model info. and pipeline testing
# What does this PR do? Just a regular update.
07-31-2023 15:35:17
07-31-2023 15:35:17
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25213). All of your documentation changes will be reflected on that endpoint.
transformers
25,212
closed
MinNewTokensLengthLogitsProcessor
null
07-31-2023 14:31:01
07-31-2023 14:31:01
transformers
25,211
closed
Fix `all_model_classes` in `FlaxBloomGenerationTest`
# What does this PR do? It should be a tuple (which requires the ending `,`)
07-31-2023 14:20:49
07-31-2023 14:20:49
_The documentation is not available anymore as the PR was closed or merged._
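The bug class fixed here is easy to reproduce in plain Python: parentheses alone do not make a one-element tuple; the trailing comma does. (The class below is a toy stand-in for the real model class.)

```python
class FlaxBloomForCausalLM:  # stand-in for the real model class
    pass

# Without the comma the parentheses are just grouping, so the name
# binds to the class itself, not to a tuple containing it.
without_comma = (FlaxBloomForCausalLM)
with_comma = (FlaxBloomForCausalLM,)
```

Test collection code that iterates over `all_model_classes` expects a tuple, which is why the missing comma broke the test.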
transformers
25,210
closed
importlib.metadata.PackageNotFoundError: bitsandbytes
### System Info `transformers` version: 4.32.0.dev0 - Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.27 - Python version: 3.9.17 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Te...
07-31-2023 14:20:49
07-31-2023 14:20:49
Hi @looperEit, thanks for reporting this issue! Could you share the installed version of bitsandbytes and how you installed it? cc @younesbelkada <|||||>I used `pip install -r *requriment.txt`, and the txt file is like: ![image](https://github.com/huggingface/transformers/assets/46367388/e6bb0a94-6a32-48f2-8a...
transformers
25,209
closed
Update InstructBLIP & Align values after rescale update
# What does this PR do? After #25174 the integration tests for Align and InstructBLIP fail. ### InstructBLIP The difference in the output logits is small. Additionally, when debugging to check the differences and resolve the failing tests, it was noticed that the InstructBLIP tests are not independent. Runnin...
07-31-2023 13:08:06
07-31-2023 13:08:06
_The documentation is not available anymore as the PR was closed or merged._<|||||>Agreed with your plan!<|||||>I also prefer 2., but I am a bit confused > Update rescale and ViVit config So this only changes `ViVit` config and its `rescale`. And Align uses `EfficientNet` image processor. So when we change somet...
transformers
25,208
open
Getting error while implementing Falcon-7B model: AttributeError: module 'signal' has no attribute 'SIGALRM'
### System Info ![Screenshot 2023-07-31 134702](https://github.com/huggingface/transformers/assets/83700281/7282ae2e-ca4f-4d87-9968-57b00fdae1f0) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the ...
07-31-2023 12:47:32
07-31-2023 12:47:32
Hey @amitkedia007 ! I'm suspecting you are using Windows? Have you tried [this](https://huggingface.co/tiiuae/falcon-7b-instruct/discussions/57)? Maybe adding `trust_remote_code = True` to `tokenizer = AutoTokenizer.from_pretrained(model_name)` in order to allow downloading the appropriate tokenizer would work. Pl...
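`signal.SIGALRM` only exists on POSIX systems, which is why this traceback appears on Windows. A portable guard is to test for the attribute before using it (pure stdlib; the fallback string below is an illustration, not what Falcon's remote code actually does):

```python
import signal

# SIGALRM is POSIX-only; on Windows the attribute is missing entirely,
# so any unconditional use raises AttributeError.
has_sigalrm = hasattr(signal, "SIGALRM")

if has_sigalrm:
    strategy = "alarm-based timeout available"
else:
    strategy = "fall back to a thread-based timeout on this platform"
```

Code that wants an alarm-style timeout can branch on `has_sigalrm` instead of assuming the signal exists.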
transformers
25,207
closed
[`pipeline`] revisit device check for pipeline
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/23336#issuecomment-1657792271 Currently `.to` is called to the model in pipeline even if the model is loaded with accelerate - which is a bad practice and can lead to unexpected behaviour if the model is loaded across multiple GPUs ...
07-31-2023 11:32:39
07-31-2023 11:32:39
After thinking about it, maybe this isn't the right fix; passing both a `device_map` and a `device` argument is a user mistake to begin with. Let me know what you think<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Yeah let's raise an error!
transformers
25,206
closed
[`PreTrainedModel`] Wrap `cuda` and `to` method correctly
# What does this PR do? As discussed internally with @sgugger Use `functools.wrap` to wrap the `to` and `cuda` methods to preserve their original signature, for example the script below: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("faceb...
07-31-2023 10:51:36
07-31-2023 10:51:36
_The documentation is not available anymore as the PR was closed or merged._
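The `functools.wraps` trick the PR relies on can be shown in isolation: wrapping a method while preserving its name, docstring, and signature. (The class and method names mirror the PR, but everything here is a toy stand-in, not the real `PreTrainedModel`.)

```python
import functools
import inspect

class Model:
    def to(self, device: str):
        """Move the model to `device`."""
        return self

original_to = Model.to

# functools.wraps copies __name__, __doc__, and sets __wrapped__, so
# inspect.signature on the wrapper reports the original signature.
@functools.wraps(original_to)
def wrapped_to(self, device: str):
    # e.g. a warning could be emitted here before delegating
    return original_to(self, device)

Model.to = wrapped_to

sig = str(inspect.signature(Model.to))
name = Model.to.__name__
```

Without `functools.wraps`, tools that introspect the method (help(), IDEs, docs builders) would see the wrapper's generic name and signature instead of the original one, which is exactly the regression the PR fixes.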