Column schema:

| column | type | stats |
|---|---|---|
| url | string | lengths 62–66 |
| repository_url | string | 1 class |
| labels_url | string | lengths 76–80 |
| comments_url | string | lengths 71–75 |
| events_url | string | lengths 69–73 |
| html_url | string | lengths 50–56 |
| id | int64 | 377M–2.15B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–29.2k |
| title | string | lengths 1–487 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k, nullable |
| author_association | string | 4 classes |
| active_lock_reason | string | 2 classes |
| body | string | lengths 0–234k, nullable |
| reactions | dict | |
| timeline_url | string | lengths 71–75 |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/19482
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19482/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19482/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19482/events
|
https://github.com/huggingface/transformers/pull/19482
| 1,404,127,173
|
PR_kwDOCUB6oc5Ai0Fd
| 19,482
|
Fix whisper for `pipeline`
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Before we merge here, let's try to have the following tests working:\r\n\r\n- Automatic pipeline tests for dummy whisper model (as discussed offline) \r\n- 2 slow pipeline tests (one for speech recognition, one for speech translation here: [https://github.com/huggingface/transformers/blob/10100979ed0594d4cfe1982cdfac9642a[β¦]/tests/pipelines/test_pipelines_automatic_speech_recognition.py](https://github.com/huggingface/transformers/blob/10100979ed0594d4cfe1982cdfac9642a68e473e/tests/pipelines/test_pipelines_automatic_speech_recognition.py#L54)",
"We were missing a `_CHECKPOINT_FOR_DOC`, so I added a warning when the tests are skipped. \r\nIt seems a little bit problematic as if it is unused, `quality` will fail (and in our case, I had to change the code to use it π ) \r\nOther models that are not tested : \r\n`SpeechEncoderDecoderModel, Speech2TextForConditionalGeneration`, which are also just missing the `_CHECKPOINT_FOR_DOC`(see [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L41) )"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
After the merge of #19378, the feature extractor no longer works with the `pipeline` function. This PR is the same as #19385.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19482/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19482",
"html_url": "https://github.com/huggingface/transformers/pull/19482",
"diff_url": "https://github.com/huggingface/transformers/pull/19482.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19482.patch",
"merged_at": 1665487074000
}
|
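For context, the fix above concerns loading Whisper through the high-level `pipeline` API; a minimal sketch of the intended behavior (the checkpoint name and the silent-audio input are chosen purely for illustration):

```python
import numpy as np
from transformers import pipeline

# Build an automatic-speech-recognition pipeline around a Whisper checkpoint.
# After this fix, `pipeline` wires in the feature extractor correctly.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")

# The pipeline accepts a raw waveform sampled at the model's rate (16 kHz).
audio = np.zeros(16000, dtype=np.float32)  # one second of silence
out = asr(audio)  # a dict with a "text" field
```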
https://api.github.com/repos/huggingface/transformers/issues/19481
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19481/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19481/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19481/events
|
https://github.com/huggingface/transformers/pull/19481
| 1,404,048,420
|
PR_kwDOCUB6oc5AijbV
| 19,481
|
Making Lxmert Tokenizer independent from bert Tokenizer
|
{
"login": "IMvision12",
"id": 88665786,
"node_id": "MDQ6VXNlcjg4NjY1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IMvision12",
"html_url": "https://github.com/IMvision12",
"followers_url": "https://api.github.com/users/IMvision12/followers",
"following_url": "https://api.github.com/users/IMvision12/following{/other_user}",
"gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions",
"organizations_url": "https://api.github.com/users/IMvision12/orgs",
"repos_url": "https://api.github.com/users/IMvision12/repos",
"events_url": "https://api.github.com/users/IMvision12/events{/privacy}",
"received_events_url": "https://api.github.com/users/IMvision12/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #19303
@sgugger can you review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19481/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19481",
"html_url": "https://github.com/huggingface/transformers/pull/19481",
"diff_url": "https://github.com/huggingface/transformers/pull/19481.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19481.patch",
"merged_at": 1665510385000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19480
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19480/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19480/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19480/events
|
https://github.com/huggingface/transformers/issues/19480
| 1,404,013,987
|
I_kwDOCUB6oc5Tr42j
| 19,480
|
[INT8] BLOOM series model loading back issue
|
{
"login": "lanking520",
"id": 11890922,
"node_id": "MDQ6VXNlcjExODkwOTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/11890922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lanking520",
"html_url": "https://github.com/lanking520",
"followers_url": "https://api.github.com/users/lanking520/followers",
"following_url": "https://api.github.com/users/lanking520/following{/other_user}",
"gists_url": "https://api.github.com/users/lanking520/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lanking520/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lanking520/subscriptions",
"organizations_url": "https://api.github.com/users/lanking520/orgs",
"repos_url": "https://api.github.com/users/lanking520/repos",
"events_url": "https://api.github.com/users/lanking520/events{/privacy}",
"received_events_url": "https://api.github.com/users/lanking520/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Maybe @sgugger has some insights",
"It's hard to know which part fails without the whole traceback. I suspect it's when we set the default dtype to `torch_dtype`, which only works for floating dtypes. If that's the case, there is a probably a workaround possible by only setting the default dtype when the `torch_dtype` passed is a floating type.\r\n\r\nAlso cc @younesbelkada since it's related to int8 format.",
"Hey @lanking520 ! \r\nThanks for your issue πͺ \r\nLet's try to debug this step by step. I suggest first to make your script run on the model `bigscience/bigscience-small-testing` - when running your script I got incorrect max_memory maps, so I had to overwrite the `max_memory` dict with the following `max_memory={0:\"10GB\", 1:\"10GB\"},` (I am running my tests on 2x NVIDIA T4).\r\nAfter that, the loading script gives me the following error:\r\n```\r\n\r\nβ /home/younes_huggingface_co/debug_issues/code/transformers/src/transformers/modeling_utils.py:10 β\r\nβ 49 in _set_default_torch_dtype β\r\nβ β\r\nβ 1046 β β `torch.int64` is passed. So if a non-float `dtype` is passed this functions will β\r\nβ 1047 β β \"\"\" β\r\nβ 1048 β β if not dtype.is_floating_point: β\r\nβ β± 1049 β β β raise ValueError( β\r\nβ 1050 β β β β f\"Can't instantiate {cls.__name__} model under dtype={dtype} since it is β\r\nβ 1051 β β β ) β\r\nβ 1052 β\r\nβ°βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ―\r\nValueError: Can't instantiate BloomForCausalLM model under dtype=torch.int8 since it is not a floating point dtype\r\n```\r\nThe \"hack\" as @sgugger suggested is to \"force-load\" the weights in a floating point format - if you run:\r\n```\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=\"auto\", device_map=\"auto\")\r\n```\r\nYou can check that the weights are indeed `int8` weights but casted in half-precision\r\n```\r\n>>> model.transformer.h[0].mlp.dense_h_to_4h.weight\r\nParameter containing:\r\ntensor([[ 96., -90., -9., ..., 36., -25., -25.],\r\n [ 0., -11., -51., ..., 9., 20., 38.],\r\n [ -8., 8., 2., ..., 36., -88., -12.],\r\n ...,\r\n [ -6., 33., -41., ..., -32., -18., -45.],\r\n [ -11., -43., -34., ..., -14., -1., -50.],\r\n [ -42., 44., 108., ..., 80., -119., 54.]], dtype=torch.float16,\r\n requires_grad=True)\r\n```\r\nHowever, please note that even if you manage to run an inference with these 
weights, you will not be able to retrieve the same accuracy / performance than the 8-bit model that is created by `load_in_8bit=True` from the `fp16` model. This is because of how the `Linear8bitLt` layer is constructed. \r\nThe crucial components of this module are the quantization statistics that are stored in[ `self.state.SCB` ](https://github.com/TimDettmers/bitsandbytes/blob/b844e104b79ddc06161ff975aa93ffa9a7ec4801/bitsandbytes/nn/modules.py#L246). The problem when saving the `state_dict` from a `Linear8bitLt` is that it does not save these statistics that are needed at inference. So when you will load 8bit weights, the module will compute new quantization statistics based on the `int8` weights - which will lead to wrong results and computations. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi all - I'm also trying to save and re-load a BLOOM model in 8-bit format, see [https://github.com/TimDettmers/bitsandbytes/issues/80](https://github.com/TimDettmers/bitsandbytes/issues/80).\r\n\r\nI'm quite new to the topic and not sure I'm able to follow everything @younesbelkada mentioned, but my understanding is that this is not possible yet, is that correct?"
] | 1,665
| 1,668
| 1,668
|
NONE
| null |
### System Info
8x A100 GPUs with CUDA 11.3 driver
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use the following script to save an INT8-quantized model, then try to load it back.
```python
import os
import torch
import logging
import math
from transformers import AutoConfig, pipeline, AutoModelForCausalLM, AutoTokenizer


def get_max_memory_per_gpu_dict(dtype, model_name):
    """try to generate the memory map based on what we know about the model and the available hardware"""
    # figure out the memory map - the minimum per gpu required to load the model
    n_gpus = torch.cuda.device_count()
    try:
        # model_params calculation, as we don't have a model yet to do:
        # model_params = sum(dict((p.data_ptr(), p.numel()) for p in model.parameters()).values())
        config = AutoConfig.from_pretrained(model_name)
        h = config.hidden_size
        l = config.n_layer
        v = config.vocab_size
        # from https://github.com/bigscience-workshop/bigscience/tree/6917a3b5fefcf439d3485ca184b4d9f6ab605150/math#model-sizing
        model_params = l * (12 * h ** 2 + 13 * h) + v * h + 4 * h
    except:
        logging.info(f"The model {model_name} has a broken config file. Please notify the owner")
        raise

    if dtype == torch.int8:
        bytes = 1
    else:
        bytes = torch.finfo(dtype).bits / 8
    param_memory_total_in_bytes = model_params * bytes
    # add 5% since weight sizes aren't the same and some GPU may need more memory
    param_memory_per_gpu_in_bytes = int(param_memory_total_in_bytes / n_gpus * 1.10)
    logging.info(f"Estimating {param_memory_per_gpu_in_bytes / 2 ** 30:0.2f}GB per gpu for weights")

    # check the real available memory
    # load cuda kernels first and only measure the real free memory after loading (shorter by ~2GB)
    torch.ones(1).cuda()
    max_memory_per_gpu_in_bytes = torch.cuda.mem_get_info(0)[0]
    if max_memory_per_gpu_in_bytes < param_memory_per_gpu_in_bytes:
        raise ValueError(
            f"Unable to generate the memory map automatically as the needed estimated memory per gpu ({param_memory_per_gpu_in_bytes / 2 ** 30:0.2f}GB) is bigger than the available per gpu memory ({max_memory_per_gpu_in_bytes / 2 ** 30:0.2f}GB)"
        )

    max_memory_per_gpu = {i: param_memory_per_gpu_in_bytes for i in range(torch.cuda.device_count())}
    print("Max memory per gpu:", max_memory_per_gpu)
    return max_memory_per_gpu


def load_model():
    world_size = torch.cuda.device_count()
    model_name = "bigscience/bloom"
    logging.info(f"Using {world_size} gpus")
    logging.info(f"Loading model {model_name}")
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    dtype = torch.int8
    kwargs = dict(
        device_map="auto",
        max_memory=get_max_memory_per_gpu_dict(dtype, model_name),
    )
    logging.info("Using `load_in_8bit=True` to use quantized model")
    kwargs["load_in_8bit"] = True
    model = AutoModelForCausalLM.from_pretrained(model_name, **kwargs)
    return model, tokenizer


model, tokenizer = load_model()
model.save_pretrained("int8_model/", max_shard_size="8GB")
```
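As a sanity check on the sizing formula in the script above, plugging in the published `bigscience/bloom` configuration values (hidden_size 14336, 70 layers, vocab_size 250880) reproduces the model's well-known ~176B parameter count:

```python
# h = hidden_size, l = n_layer, v = vocab_size for bigscience/bloom
h, l, v = 14336, 70, 250880
model_params = l * (12 * h ** 2 + 13 * h) + v * h + 4 * h
print(f"{model_params / 1e9:.1f}B parameters")      # 176.2B parameters
# At int8 precision (1 byte per parameter), the weights alone need:
print(f"{model_params / 2 ** 30:.0f} GiB at int8")  # 164 GiB at int8
```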
When loading from the directory, the following error occurs:
```
RuntimeError: Only Tensors of floating point dtype can require gradients
```
The error is raised during the initialization of the model:
```python
import torch
import torch.distributed as dist
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
model_name = 'int8_model/'
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.int8)
```
### Expected behavior
The loading should pass. Looking for a workaround in the meantime...
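The traceback is consistent with `from_pretrained` rejecting any non-floating `torch_dtype` before the weights are touched; the underlying constraint can be checked directly (a sketch using only stock PyTorch):

```python
import torch

# Parameters must support requires_grad, and the default dtype can only be set
# to a floating-point type, so an integer dtype is rejected up front.
print(torch.int8.is_floating_point)     # False
print(torch.float16.is_floating_point)  # True
```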
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19480/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19479
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19479/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19479/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19479/events
|
https://github.com/huggingface/transformers/pull/19479
| 1,403,968,972
|
PR_kwDOCUB6oc5AiSkq
| 19,479
|
Fix `OPTForQuestionAnswering` doctest
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
The checkpoint has no QA head, so we need to set a seed and change the expected values.
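Since the checkpoint's QA head is randomly initialized, the doctest output is only reproducible if the RNG is pinned first; a minimal sketch of why seeding fixes the expected values (the seed value itself is arbitrary):

```python
import torch
from transformers import set_seed

# Pin all RNGs (python, numpy, torch) so randomly initialized head weights,
# and therefore the doctest's expected values, are reproducible.
set_seed(7)
a = torch.rand(3)
set_seed(7)
b = torch.rand(3)
# identical draws after re-seeding with the same value
```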
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19479/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19479",
"html_url": "https://github.com/huggingface/transformers/pull/19479",
"diff_url": "https://github.com/huggingface/transformers/pull/19479.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19479.patch",
"merged_at": 1665511985000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19478
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19478/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19478/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19478/events
|
https://github.com/huggingface/transformers/issues/19478
| 1,403,902,175
|
I_kwDOCUB6oc5Trdjf
| 19,478
|
openai whisper ASR pytorch to tflite
|
{
"login": "nyadla-sys",
"id": 26728802,
"node_id": "MDQ6VXNlcjI2NzI4ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/26728802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nyadla-sys",
"html_url": "https://github.com/nyadla-sys",
"followers_url": "https://api.github.com/users/nyadla-sys/followers",
"following_url": "https://api.github.com/users/nyadla-sys/following{/other_user}",
"gists_url": "https://api.github.com/users/nyadla-sys/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nyadla-sys/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nyadla-sys/subscriptions",
"organizations_url": "https://api.github.com/users/nyadla-sys/orgs",
"repos_url": "https://api.github.com/users/nyadla-sys/repos",
"events_url": "https://api.github.com/users/nyadla-sys/events{/privacy}",
"received_events_url": "https://api.github.com/users/nyadla-sys/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"@patrickvonplaten @sgugger @amyeroberts @ArthurZucker :Could anyone of you help me on this ?",
"Below notebook generates a tflite file, however I have not validated it with real speech input \r\nhttps://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/tflite_from_huggingface_whisper.ipynb",
"Can anyone help me to run inference on this notebook to validate generated tflite file ",
"@ydshieh Could you please help me to validate tflite file that got generated using below google notebook \r\nhttps://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/tflite_from_huggingface_whisper.ipynb",
"@nyadla-sys The TF Whisper is just released, maybe you can try with it.\r\n\r\nHowever, our team is still working on some TensorFlow model saving issues, so I don't know if the conversion/inference will work out of the box.",
"@ydshieh ran inference on converted tflite file and it doesn't work.",
"@ydshieh \r\nI could generate encoder int8 tflite model from openai->whisper(pytorch) and ran inference.\r\nhttps://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/generate_tflite_from_whisper.ipynb\r\ncan someone help me to generate int8 decoder tflite model from openai->whisper(pytorch) and run inference?"
] | 1,665
| 1,670
| null |
NONE
| null |
### Model description
I'm trying to figure out how to create tflite models (int8/float32) for the OpenAI Whisper ASR model (Tiny.en.pt).
Somehow the tflite file generated by the notebook below crashes while running inference:
https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/tinynn_pytorch_to_tflite_int8.ipynb
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19478/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/19477
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19477/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19477/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19477/events
|
https://github.com/huggingface/transformers/pull/19477
| 1,403,875,946
|
PR_kwDOCUB6oc5Ah_gG
| 19,477
|
Adding the state-of-the-art contrastive search decoding methods for the codebase of generation_utils.py
|
{
"login": "gmftbyGMFTBY",
"id": 27548710,
"node_id": "MDQ6VXNlcjI3NTQ4NzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/27548710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gmftbyGMFTBY",
"html_url": "https://github.com/gmftbyGMFTBY",
"followers_url": "https://api.github.com/users/gmftbyGMFTBY/followers",
"following_url": "https://api.github.com/users/gmftbyGMFTBY/following{/other_user}",
"gists_url": "https://api.github.com/users/gmftbyGMFTBY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gmftbyGMFTBY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gmftbyGMFTBY/subscriptions",
"organizations_url": "https://api.github.com/users/gmftbyGMFTBY/orgs",
"repos_url": "https://api.github.com/users/gmftbyGMFTBY/repos",
"events_url": "https://api.github.com/users/gmftbyGMFTBY/events{/privacy}",
"received_events_url": "https://api.github.com/users/gmftbyGMFTBY/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger @patrickvonplaten context: this is the implementation by the authors of [this NeurIPS paper](https://arxiv.org/abs/2202.06417), as first proposed in #19182 -- a new generation strategy with very interesting results!",
"Hi, @sgugger, thank you so much for your suggestions. I will fix these problems quickly!",
"Hello @sgugger, is there any document or introduction for auto-APIs?",
"The [documentation](https://huggingface.co/docs/transformers/model_doc/auto) would be the place to start. You can also look at all other examples!",
"Hello, @sgugger, I have fixed the problems based on your valuable suggestions! Besides, I have updated the test scripts to the auto-APIs of inference.\r\n\r\nThe command line to run this test script can be found in its docstring.",
"Hello @sgugger, I have fixed the problems based on your suggestions.",
"Hey @gmftbyGMFTBY,\r\n\r\nSuper nice PR! Contrastive search decoding would be a great addition to Transformers. I feel a bit uneasy of the logic in src/transformers/generation_contrastive_search.py and would prefer to not add a new file here and instead try to adapt the code more to our existing design (try to make use of logits processor, call the model **only** in the loop and not before).\r\n\r\nWe usually haven't added any new files for generation and molding the design more into what already exists would help a lot with maintainability. \r\nAs an example for constrained decoding, we sadly don't manage to maintain the function anymore (see https://github.com/huggingface/transformers/pull/17920) because the code was too different & too hard for us to maintain (IMO).\r\n\r\nIf possible I'd advocate quite strongly for trying to make the PR a bit more compatible with our current design (it shouldn't require much work IMO).\r\n\r\n@gante @sgugger what do you think? ",
"I don't know the generate code well enough to evaluate how easy/hard it would be for this PR to fit in the existing design. I solely gave my approval based on the fact it was mimicking something that existed with constraint decoding. Sorry, I wasn't aware it wasn't maintained anymore.\r\n\r\nWill know for the next time :-)\r\n\r\n",
"@patrickvonplaten replying here since I believe the multiple important points you raised are related. This will be a bit long, but bear with me - I think we can improve the quality of the PR before it gets merged π \r\n\r\n1. `ContrastiveDecodingOneStepFast` (which should be snake cased, perhaps into `contrastive_decoding_step`) being a stand-alone function - it could be part of the loop, yes, but this is a long operation that in essence replaces picking the argmax (in `greedy_search`) or sampling (in `sample`). I agree it should not be public, but I believe we would stand to gain from a readability standpoint if we separate it somehow -- perhaps a private method in the MixIn or an inner function to `contrastive_search`? These two options would also mean that we don't need to pass `model` as an argument, which is indeed awkward.\r\n2. `ranking_fast` being a `LogitsProcessor` - I would disagree here, despite being an operation that processes logits. In addition to breaking the fairly stable `call()` API with two new tensor inputs (`context_hidden` and `next_hidden`), it can only be used with `contrastive_search`. As such, I think we would only be adding a confusing `LogitsProcessor` to the public API.\r\n3. Despite the above, I agree that we should pass and compute the logits processors before the decoding function, like the other `generate` methods -- this was an oversight on my end at review time! As for `top_k`, I have no strong feelings - on one hand, `contrastive search` is not meant to be used with other `LogitWarpers` (like `TemperatureLogitsWarper`), on the other hand there is probably no harm in using `LogitWarpers`.\r\n\r\n(now the most challenging issue in design, IMO)\r\n\r\n4. A naive implementation of `contrastive_search` needs two sets of forward passes per token -- once to get the `top_k` candidate tokens, another to get the future hidden states for each candidate (to be used as a filter). 
We can avoid this 2x cost if we pipe future `past_key_values` corresponding to the selected candidate into the actual `past_key_values`. It does mean, however, that the first token requires an additional forward pass. The current code does it by placing `prepare_inputs_for_generation()` (and a few other ops) in unconventional places, before the loop and at the end of the loop. Perhaps it would be better if we kept the original structure, but added an `if` at the start loop -- if we are in the first iteration, do this set of operations.\r\n\r\nWDYT about these 4 points?",
"Thanks for the summary @gante !\r\n\r\nRegarding the 4 points above:\r\n\r\n1.) I think the `ContrastiveDecodingOneStepFast` is the core of the contrastive generation algorithm and I don't see any advantage having it it's own \"sub-function\" (it'll never be called from another function than `contrastive_search`). Also, at the moment, the method returns 6 tensors and accepts 8 or so arguments so it's much more than just replacing the greedy argmax operation IMO. To begin with, I think it'd be very helpful to just copy-paste all the code into the `contrastive_search` and then later down the road I think we could see if parts of it could be moved out (don't think it's necessary though).\r\n\r\n2.) I think we would have the following gains to having it implemented as a logit processor:\r\n- We understand the method better & easier to maintain. If we mold `ranking` fast into a logit processor, it's much easier to understand it\r\n- Very easy to test this method\r\n- It is to me a method that processes logits -> so logically it should be a logits processor to me\r\n- Now regarding the API, IMO `logits_processor` inputs are not just restricted to `input_ids` and `scores`. IMO, the API is rather (`input_ids`, `scores`, <other-args-that-are-needed>) => `scores` . Also, we've already \"broken\" the \"input_ids\" & \"scores\" - only API here: https://github.com/huggingface/transformers/blob/d2e5b19b821f0cf43c7cf4f01be5faa1cb20aa64/src/transformers/generation_utils.py#L2989 \r\n=> I understand your point here and ok for me to not make it a logits processor, but IMO it would be better/cleaner\r\n3.) Yes I think it's very important to align the structure of the generate method as much as possible to the other ones\r\n4.) Totally fine for me to have an extra forward pass in the beginning, it's just important that we make the structure the same as greedy or sample (I don't have a problem with if statements here at all). 
For maintenance and also to understand the method, it's very important IMO that one could compare greedy search and constrastive search line-by-line and then see quickly how the two methods differ",
"Thank you for your valuable responses! I will revise this PR according to your reviews.",
"@gmftbyGMFTBY let's go with the core of @patrickvonplaten's suggestions. Patrick and I also talked on Slack to align a few details regarding `ranking_fast` :D\r\n\r\nHere's a summary of the main changes we are requesting, to ensure the code you're adding remains easy to maintain:\r\n1. Move the contents of `ContrastiveDecodingOneStepFast` to where the function is called, as opposed to being a function call;\r\n2. Keep `ranking_fast` as it is now (i.e. NOT a logits processor);\r\n3. Let's apply the logits processors at the start of each iteration, and move `top_k` to the logits warpers (like `sample` [does it](https://github.com/huggingface/transformers/blob/d2e5b19b821f0cf43c7cf4f01be5faa1cb20aa64/src/transformers/generation_utils.py#L2059)). This ensures that there are minimal differences between generation strategies;\r\n4. Let's rearrange the order of operations such that all model forward passes happen inside the generation loop (with an `if` for the operations that are only supposed to happen on the first iterations).\r\n\r\nHere is a diagram with the expected code structure, to ensure we are all on the same page: https://miro.com/app/board/uXjVPMiVFFg=/\r\n\r\nFinally, thank you for your cooperation with us π This back and forth may be a bit frustrating, but it will ensure your contribution will be long-lived!\r\n",
"@gante Nice! But I just saw your message after I finished writing the `logits_processor` corresponding to the `ranking_fast` function.\r\nThe following is the implementation of `ranking_fast`'s `logits_processor`:\r\n\r\n\r\nIt is initialized in the `_get_logits_processor` function:\r\n\r\n\r\nand will be called:\r\n\r\n\r\nShould I keep this implementation, or still use the `ranking_fast` function?\r\n\r\n",
"@gante Oh, I got your reason about not using the `logits_processor`. I will follow the instruction in the `sample` function!",
"The revisions have been updated!",
"Hello, @gante, I have updated the PR based on your suggestions.",
"Oh, I am still working on the integration test.",
"@gmftbyGMFTBY you probably have to add the `@slow` decorator to the test, and run it locally with `RUN_SLOW=1 py.test (...)` to confirm that it is working. \r\n\r\nOur CI doesn't run tests with `@slow` on push (and fails if the test doesn't have the decorator and is actually slow), but we run them every 24h and track them internally :)",
"Ok, I got it!\r\n",
"Okay, I am working on it! Thanks a lot for your reviews!",
"BTW @gmftbyGMFTBY,\r\n\r\nJust read through your extremely nice issue! It seems like you experimented with OPT as well, so maybe let's add a test for OPT as well then ? :-) OPT's `past_key_values` are slightly different compared to GPT2's `past_key_values` so maybe instead of adding a test for GPT-J and GPT-2, it would make more sense to add a test for OPT in addition to GPT2?\r\n\r\nAlso, if the paper is only concerned with open-ended generation (so less with encoder-decoder architectures), I'm also totally fine with **not** testing for T5 and BART (it's a nice to have, but if it takes too much time and it's not too important - happy to skip it!).\r\n\r\nRegarding the fast dummy test, could you maybe make use of those dummy models: \r\n- https://huggingface.co/hf-internal-testing/tiny-random-gpt2\r\n- https://huggingface.co/hf-internal-testing/tiny-random-gptj\r\n- https://huggingface.co/hf-internal-testing/tiny-random-t5\r\n- https://huggingface.co/hf-internal-testing/tiny-random-bart\r\n\r\nThe tests could look very similar to:\r\nhttps://github.com/huggingface/transformers/blob/71ca79448cd334970fa2893f4faaa094ca13ca6f/tests/generation/test_generation_utils.py#L2053\r\n\r\njust much shorter, *i.e.* they only need to test for shape equality. ",
"Yeah, we have already tested the OPT models, and they work fine. I will add more tests for the pre-trained models that you mentioned.",
"@patrickvonplaten more tests for these models have been added: \r\n* gpt2-large\r\n* gpt-j (EleutherAI/gpt-j-6B)\r\n* opt (facebook/opt-6.7b)\r\n* BART (facebook/bart-large-cnn)\r\n* T5 (flax-community/t5-base-cnn-dm)\r\n\r\nThese tests all pass successfully. Can you do the final check of this PR?",
"Thank you for being part of this process @gmftbyGMFTBY π All queries have been addressed and the PR looks in a good state, merging! ",
"@gante @patrickvonplaten @sgugger Wow, Thank you very much for your help and support. Love huggingface team! ",
"@gante @patrickvonplaten @sgugger -- Many thanks for your kind help throughout the process! It means a great deal to me and @gmftbyGMFTBY. Huggingface is the best!",
"Great work @gmftbyGMFTBY and @yxuansu, thanks for bearing with us through the PR :-) "
] | 1,665
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
# Adding the state-of-the-art contrastive search decoding method for the `generation_utils` codebase
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19182
In this PR, I add the source code of our proposed state-of-the-art decoding method for off-the-shelf neural text generation models. The main changes are in the following files: (1) `src/transformers/generation_utils.py`; (2) `examples/pytorch/text-generation/run_generation_contrastive_search.py`. To run the test script, use the following commands:
```bash
cd examples/pytorch/text-generation;
CUDA_VISIBLE_DEVICES=0 python run_generation_contrastive_search.py --model_type=gpt2 --model_name_or_path=gpt2-large
```
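For readers skimming this PR, the candidate-ranking rule at the heart of contrastive search — weigh the model's confidence in a token against that token's similarity to the existing context — can be sketched in a few lines of NumPy. This is only an illustrative sketch with toy inputs; the names (`contrastive_score`, `cand_hidden`, etc.) and the penalty coefficient shown here are ours, not the PR's actual implementation:

```python
import numpy as np

def contrastive_score(probs, cand_hidden, ctx_hidden, alpha=0.6):
    """Score each top-k candidate token: model confidence minus a
    degeneration penalty (max cosine similarity to the context)."""
    cand = cand_hidden / np.linalg.norm(cand_hidden, axis=-1, keepdims=True)
    ctx = ctx_hidden / np.linalg.norm(ctx_hidden, axis=-1, keepdims=True)
    # penalty is high when the candidate's hidden state closely matches
    # any hidden state already in the context (i.e. it repeats itself)
    penalty = (cand @ ctx.T).max(axis=-1)
    return (1 - alpha) * probs - alpha * penalty

rng = np.random.default_rng(0)
ctx_h = rng.normal(size=(5, 8))        # 5 context tokens, hidden size 8
# candidate 0 exactly repeats the last context state; candidate 1 is fresh
cand_h = np.vstack([ctx_h[-1:], rng.normal(size=(1, 8))])
probs = np.array([0.7, 0.3])           # the repetitive candidate is more probable
scores = contrastive_score(probs, cand_h, ctx_h)
best = int(np.argmax(scores))          # a less probable but dissimilar candidate can win
```

In the actual PR the hidden states come from the model's forward pass and the selection happens inside the generation loop; the sketch only shows how the degeneration penalty trades off against the raw probability.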
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] The PR has been well discussed in [19182](https://github.com/huggingface/transformers/issues/19182)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? Yes, I have written the test scripts for the [contrastive search](https://github.com/gmftbyGMFTBY/transformers/blob/csearch-pr-v2/examples/pytorch/text-generation/run_generation_contrastive_search.py)
## Who can review?
According to the suggestions of @gante, @patrickvonplaten and @sgugger can review this PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19477/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19477",
"html_url": "https://github.com/huggingface/transformers/pull/19477",
"diff_url": "https://github.com/huggingface/transformers/pull/19477.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19477.patch",
"merged_at": 1666171066000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19476
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19476/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19476/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19476/events
|
https://github.com/huggingface/transformers/pull/19476
| 1,403,678,026
|
PR_kwDOCUB6oc5AhVOF
| 19,476
|
[REIMPLEMENTATION] Vision encoder decoder ONNX conversion
|
{
"login": "WaterKnight1998",
"id": 41203448,
"node_id": "MDQ6VXNlcjQxMjAzNDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/41203448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WaterKnight1998",
"html_url": "https://github.com/WaterKnight1998",
"followers_url": "https://api.github.com/users/WaterKnight1998/followers",
"following_url": "https://api.github.com/users/WaterKnight1998/following{/other_user}",
"gists_url": "https://api.github.com/users/WaterKnight1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WaterKnight1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WaterKnight1998/subscriptions",
"organizations_url": "https://api.github.com/users/WaterKnight1998/orgs",
"repos_url": "https://api.github.com/users/WaterKnight1998/repos",
"events_url": "https://api.github.com/users/WaterKnight1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/WaterKnight1998/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19476). All of your documentation changes will be reflected on that endpoint.",
"Hi @WaterKnight1998 thanks for your PR! \r\n\r\nIndeed PR #19254 splits the model into separate encoder / decoder pieces, which differs from other seq2seq models that are currently implemented as a single ONNX graph. The main reason is the following:\r\n\r\n* To speed up the decoding process, it is more efficient to have a single pass through the encoder, followed by N passes through the decoder. \r\n* To support the caching of past key-value pairs, it is more efficient to have a separate decoder\r\n\r\nDo you happen to have a latency benchmark for the `VisionEncoderDecoder` export that compares your PR vs the current implementation? I think this would be the axis on which we'd consider incorporating your changes, but I would be surprised if a single graph can beat the decomposed ones.\r\n\r\nYou can find more information in our `optimum` library, where we'll be implementing the pipeline for inference: https://huggingface.co/docs/optimum/onnxruntime/modeling_ort#export-and-inference-of-sequencetosequence-models\r\n\r\ncc @mht-sharma re the original implementation",
"> Hi @WaterKnight1998 thanks for your PR!\r\n> \r\n> Indeed PR #19254 splits the model in separate encoder / decoder pieces, which differs to other seq2seq models that are currently implemented as a single ONNX graph. The main reason is the following:\r\n> \r\n> * To speed up the decoding process, it is more efficient to have a single pass through the encoder, followed by N passes through the decoder.\r\n> * To support the caching of past key-value pairs, it is more efficient to have a separate decoder\r\n> \r\n> Do you happen to have a latency benchmark for the `VisionEncoderDecoder` export that compares your PR vs the current implementation? I think this would be the axis on which we'd consider incorporating your changes, but I would be surprised if a single graph can beat the decomposed ones.\r\n\r\nI will try to create it, but I didn't take into account what you mention. It makes sense to just do a single pass in the encoder\r\n\r\n> You can find more information in our `optimum` library, where we'll be implementing the pipeline for inference: https://huggingface.co/docs/optimum/onnxruntime/modeling_ort#export-and-inference-of-sequencetosequence-models\r\n> \r\n\r\nI am looking forward for this implementation, I am thinking on running Donut model in production and it will be very cool, actually the inference times are pretty bad. I have an open PR for ONNX conversion: #19401\r\n\r\n",
"@lewtun is this the pipeline that you mention: https://github.com/huggingface/optimum/blob/996f209147a466c7ecf5bfb29c9fd2e9831ea3a7/optimum/onnxruntime/modeling_seq2seq.py#L154?",
"> @lewtun is this the pipeline that you mention: https://github.com/huggingface/optimum/blob/996f209147a466c7ecf5bfb29c9fd2e9831ea3a7/optimum/onnxruntime/modeling_seq2seq.py#L154?\r\n\r\nYes @WaterKnight1998, in this implementation the encoder and decoder part are exported separately and the inference is performed using ORT.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@lewtun @sgugger please reopen it ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,672
| 1,672
|
NONE
| null |
# What does this PR do?
This PR is a reimplementation of the Vision Encoder Decoder ONNX conversion as a Seq2Seq model, as the documentation explains: [Encoder-decoder models inherit from OnnxSeq2SeqConfigWithPast](https://huggingface.co/docs/transformers/v4.22.2/en/main_classes/onnx#transformers.onnx.OnnxSeq2SeqConfigWithPast)
PR #19254 didn't follow these classes. There are several examples in the repo of how to use them: https://github.com/huggingface/transformers/blob/v4.22.2/src/transformers/models/mbart/configuration_mbart.py
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@ChainYo for OnnxConfigs
@lewtun & @sgugger for approving PR: #19254
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19476/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19476",
"html_url": "https://github.com/huggingface/transformers/pull/19476",
"diff_url": "https://github.com/huggingface/transformers/pull/19476.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19476.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19475
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19475/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19475/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19475/events
|
https://github.com/huggingface/transformers/pull/19475
| 1,403,653,596
|
PR_kwDOCUB6oc5AhQCN
| 19,475
|
[Swin] Replace hard-coded batch size to enable dynamic ONNX export
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"(a bit off-topic, but still related question)\r\n\r\nViewing the issue and the fix provided in this PR, I was thinking we would have a lot of the same errors due to this `hard-coded batch size`. However, when I check `bert`:\r\nhttps://github.com/huggingface/transformers/blob/10100979ed0594d4cfe1982cdfac9642a68e473e/src/transformers/models/bert/modeling_bert.py#L962\r\nand\r\nhttps://github.com/huggingface/transformers/blob/10100979ed0594d4cfe1982cdfac9642a68e473e/src/transformers/models/bert/modeling_bert.py#L968\r\n\r\nbut the ONNX tests still pass for `bert`. Are `batch_size` and `seq_length` not hard-coded here? Just wondering if @lewtun already has some insight regarding this.\r\n\r\n",
"> Is the issue caused by the changes in #19255? More precisely, from (newly added code) https://github.com/dwyatte/transformers/blob/949683675d83cc38620106626822279cd45b076b/src/transformers/onnx/convert.py#L368\r\n> \r\n> The error shows `Outputs values doesn't match between reference model and ONNX exported model` - it must be non-trivial to figure out this is coming from the shape things! How are you able to find out π― ? Is there some tool we can use to check things (tensor values/shape) when running onnx inference?\r\n\r\nYes, this issue was surfaced by #19255, which implemented a stronger validation test on exported ONNX models. Basically, it generates the ONNX graph using dummy data with one batch size `b`, and then validates the forward pass with a different `b'`. \r\n\r\nThe reason it can be non-trivial to figure out when an export fails to have agreement between the PyTorch / ONNX models is that ONNX traces a graph based on dummy data, and this tracing can be incorrect if there are data-dependent flow statements (Swin in particular has a lot of these if/else statements). Currently, the best tool I know of is to visualise the graph with [Netron](https://netron.app/) and manually inspect for discrepancies.\r\n\r\n> Viewing the issue and the fix provided in this PR, I was thinking we would have a lot of the same errors due to this `hard-coded batch size`:\r\n\r\nI think in those cases we don't hit a problem because `batch_size` is only used to create the attention mask when none is provided. Since our dummy input provides an attention mask, this flow in the graph is never traced AFAICT "
] | 1,665
| 1,665
| 1,665
|
MEMBER
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR tweaks the modeling code of `swin` to enable dynamic batch sizes with the ONNX export. With this fix, the ONNX slow tests for this model now pass, including the slow tests for the original PyTorch model:
```bash
# This passes
RUN_SLOW=1 pytest -x -sv tests/models/swin/test_modeling_swin.py
# This also passes
RUN_SLOW=1 pytest -x -sv tests/onnx/test_onnx_v2.py -k "swin"
```
Since this change also impacts other models, I've also checked the modeling slow tests pass for:
- [x] `maskformer`
- [x] `donut_swin`
- [x] `swin_v2`
Related to https://github.com/huggingface/transformers/issues/17476
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19475/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19475",
"html_url": "https://github.com/huggingface/transformers/pull/19475",
"diff_url": "https://github.com/huggingface/transformers/pull/19475.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19475.patch",
"merged_at": 1665494490000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19474
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19474/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19474/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19474/events
|
https://github.com/huggingface/transformers/issues/19474
| 1,403,632,294
|
I_kwDOCUB6oc5Tqbqm
| 19,474
|
Sample method doesn't work for mt5 architecture
|
{
"login": "tatiana-iazykova",
"id": 70767376,
"node_id": "MDQ6VXNlcjcwNzY3Mzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/70767376?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tatiana-iazykova",
"html_url": "https://github.com/tatiana-iazykova",
"followers_url": "https://api.github.com/users/tatiana-iazykova/followers",
"following_url": "https://api.github.com/users/tatiana-iazykova/following{/other_user}",
"gists_url": "https://api.github.com/users/tatiana-iazykova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tatiana-iazykova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tatiana-iazykova/subscriptions",
"organizations_url": "https://api.github.com/users/tatiana-iazykova/orgs",
"repos_url": "https://api.github.com/users/tatiana-iazykova/repos",
"events_url": "https://api.github.com/users/tatiana-iazykova/events{/privacy}",
"received_events_url": "https://api.github.com/users/tatiana-iazykova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"moreover, if I try to use generate with num_beams=1 and do_sample=True as it says in documentation, I got really weird scores:\r\n\r\n\r\ninp = batch # result of next(item(torch.utils.Dataloader))\r\n\r\ngenerate_results = model.generate(\r\n input_ids=inp['input_ids'].to(model.device),\r\n attention_mask=inp['attention_mask'].to(model.device),\r\n output_scores=True,\r\n num_beams=1,\r\n do_sample=True,\r\n return_dict_in_generate=True,\r\n )\r\n\r\n```\r\nSampleEncoderDecoderOutput(sequences=tensor([[ 0, 259, 264, 66689, 31018, 2793, 79143, 1334, 259,\r\n 3205, 1264, 1285, 2776, 179124, 192823, 149529, 13647, 260,\r\n 1, 0],\r\n [ 0, 563, 3392, 259, 37446, 4205, 59633, 299, 19966,\r\n 259, 20364, 484, 261, 259, 21230, 332, 287, 9983,\r\n 9844, 260],\r\n [ 0, 92868, 111621, 1498, 543, 68093, 259, 23886, 446,\r\n 22708, 261, 259, 3266, 259, 31989, 411, 4338, 3695,\r\n 18203, 388],\r\n [ 0, 2553, 1049, 7584, 8608, 3671, 261, 892, 6888,\r\n 3647, 7077, 456, 23480, 729, 23761, 1128, 260, 1,\r\n 0, 0],\r\n [ 0, 4343, 729, 79829, 261, 3553, 12799, 261, 259,\r\n 279, 33658, 433, 30643, 308, 116992, 34741, 259, 95298,\r\n 388, 10081],\r\n [ 0, 259, 73394, 304, 459, 54650, 261, 1361, 24856,\r\n 730, 169977, 1, 0, 0, 0, 0, 0, 0,\r\n 0, 0],\r\n [ 0, 25870, 261, 259, 109102, 261, 2477, 277, 270,\r\n 3256, 416, 259, 185416, 521, 260, 260, 4837, 609,\r\n 277, 263],\r\n [ 0, 259, 74815, 688, 13986, 657, 425, 259, 46805,\r\n 7378, 259, 30821, 877, 816, 267, 1, 0, 0,\r\n 0, 0],\r\n [ 0, 259, 279, 259, 74725, 2266, 259, 71145, 748,\r\n 274, 24186, 261, 37828, 324, 1, 0, 0, 0,\r\n 0, 0],\r\n [ 0, 259, 176741, 11966, 425, 8790, 1344, 43078, 543,\r\n 259, 279, 259, 98503, 1400, 259, 30821, 274, 83935,\r\n 1633, 13637],\r\n [ 0, 486, 10753, 263, 1432, 344, 1537, 1459, 27906,\r\n 1537, 2985, 261, 1866, 569, 6535, 339, 259, 45628,\r\n 281, 287],\r\n [ 0, 336, 3031, 272, 277, 270, 1689, 4065, 288,\r\n 342, 714, 260, 1, 0, 0, 0, 0, 0,\r\n 0, 0]], device='cuda:0'), scores=(tensor([[-inf, 
-inf, -inf, ..., -inf, -inf, -inf],\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf]], device='cuda:0'), tensor([[-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf]], device='cuda:0'), tensor([[-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf]], device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [ -inf, -4.1323, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf]],\r\n device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -6.2682, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf]],\r\n device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [-0.6975, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf]],\r\n device='cuda:0'), tensor([[-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [-inf, -inf, -inf, ..., -inf, -inf, 
-inf],\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [-inf, -inf, -inf, ..., -inf, -inf, -inf]], device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -4.8971, -inf, ..., -inf, -inf, -inf]],\r\n device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [ -inf, -7.8711, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf]],\r\n device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -3.7007, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [-0.7802, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -6.0860, -inf, ..., -inf, -inf, -inf]],\r\n device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -3.4524, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -6.1068, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [ -inf, -6.8437, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -0.4513, -inf, ..., -inf, -inf, -inf]],\r\n device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -4.2931, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, 7.0613, -inf, ..., -inf, -inf, -inf]],\r\n device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -8.3437, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, 
-inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [14.4250, 1.6957, -inf, ..., -inf, -inf, -inf]],\r\n device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [ -inf, -8.2820, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [15.0611, -inf, -inf, ..., -inf, -inf, -inf]],\r\n device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [15.2473, -inf, -inf, ..., -inf, -inf, -inf]],\r\n device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [ -inf, -0.5828, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [15.2603, 2.0967, -inf, ..., -inf, -inf, -inf]],\r\n device='cuda:0'), tensor([[ -inf, -2.9308, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [14.6447, 1.9284, -inf, ..., -inf, -inf, -inf]],\r\n device='cuda:0'), tensor([[ -inf, 4.6353, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -8.9710, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n ...,\r\n [ -inf, -inf, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -6.0908, -inf, ..., -inf, -inf, -inf],\r\n [14.7882, 1.9284, -inf, ..., -inf, -inf, -inf]],\r\n device='cuda:0'), tensor([[15.3742, 2.2133, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -0.5673, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -3.3424, -inf, 
..., -inf, -inf, -inf],\r\n ...,\r\n [ -inf, -2.2742, -inf, ..., -inf, -inf, -inf],\r\n [ -inf, -6.5111, -inf, ..., -inf, -inf, -inf],\r\n [15.1830, 2.0473, -inf, ..., -inf, -inf, -inf]],\r\n device='cuda:0')), encoder_attentions=None, encoder_hidden_states=None, decoder_attentions=None, cross_attentions=None, decoder_hidden_states=None)\r\n```",
"@gante or @ArthurZucker could you take this one? :-) ",
"Hi @tatiana-iazykova π \r\n\r\nRegarding the issue in the first post: the problem is that the docstring example (that you based your script on) only works for decoder-only models, and an encoder-decoder model is used. For encoder-decoder models, the input needs some additional processing, which `generate()` gracefully handles [here](https://github.com/huggingface/transformers/blob/f4ef78af543a166551889da8737cc3134a7d9dd3/src/transformers/generation_utils.py#L1281). While we *could* make `model.sample()` and other generation strategies handle this sort of case, the code will quickly become a pain to maintain (input handling would have to be added everywhere), so I'm inclined not to make any attempt to fix it and to redirect towards `model.generate()` instead :) (cc @patrickvonplaten -- maybe we should add a line in the docstrings to prioritize the use of `.generate()`?)\r\n\r\nAs for the second issue -- can you share a reproducible script? π The scores would look normal if a logits processor like top_k was used, but it doesn't seem to be the case π€ ",
"Noted.\r\n\r\nAs for the second issue, the input was standard like the one specified here (https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb#scrollTo=kTCFado4IrIc)\r\n\r\nand code for generate was the following:\r\n\r\n```python\r\n\r\ngenerate_results = model.generate(\r\n input_ids=inp['input_ids'].to(model.device),\r\n attention_mask=inp['attention_mask'].to(model.device),\r\n output_scores=True,\r\n num_beams=1,\r\n do_sample=True,\r\n return_dict_in_generate=True,\r\n)\r\n```",
"@tatiana-iazykova I see -- in the example you shared, `top_k` is not passed, so it inherits the [default value](https://huggingface.co/docs/transformers/v4.23.1/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.top_k) (50). `top_k` sets the log probability of all but the K most likely tokens in `-inf`, which explains the numbers you see :)\r\n\r\nAs a counter example, you can turn `top_k` off by setting it to the size of the vocabulary:\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\n\r\nmodel_name = \"distilgpt2\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name)\r\n\r\ninputs = tokenizer(\"This is a simple test\", return_tensors=\"pt\")\r\ngenerate_outputs = model.generate(\r\n input_ids=inputs['input_ids'],\r\n attention_mask=inputs['attention_mask'],\r\n output_scores=True,\r\n num_beams=1,\r\n do_sample=True,\r\n return_dict_in_generate=True,\r\n top_k=tokenizer.vocab_size\r\n)\r\nprint(generate_outputs)\r\n```",
"I'm closing the issue as this is intended behavior, but feel free to reopen with further queries :)"
] | 1,665
| 1,665
| 1,665
|
NONE
| null |
### System Info
- `transformers` version: 4.21.3
- Platform: Linux-5.4.0-122-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
MT5 sample method @patrickvonplaten
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The following code (from https://huggingface.co/docs/transformers/v4.22.2/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.sample)
```python
from transformers import (
MT5Tokenizer,
MT5ForConditionalGeneration,
LogitsProcessorList,
MinLengthLogitsProcessor,
TopKLogitsWarper,
TemperatureLogitsWarper,
StoppingCriteriaList,
MaxLengthCriteria,
)
import torch
tokenizer = MT5Tokenizer.from_pretrained("google/mt5-base")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-base")
# set pad_token_id to eos_token_id because GPT2 does not have an EOS token
input_prompt = "Today is a beautiful day, and"
input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids
# instantiate logits processors
logits_processor = LogitsProcessorList(
[
MinLengthLogitsProcessor(15, eos_token_id=model.config.eos_token_id),
]
)
# instantiate logits processors
logits_warper = LogitsProcessorList(
[
TopKLogitsWarper(50),
TemperatureLogitsWarper(0.7),
]
)
stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])
torch.manual_seed(0)
outputs = model.sample(
input_ids,
logits_processor=logits_processor,
logits_warper=logits_warper,
stopping_criteria=stopping_criteria,
)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
results in `ValueError: You have to specify either input_ids or inputs_embeds`.
Explicitly setting `decoder_input_ids` did not help either.
### Expected behavior
I tried to use the `sample` method instead of `generate` and could not get rid of the `ValueError`.
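For intuition about the warpers chained in the script above, here is a plain-Python approximation. This is a simplified stand-in for `TopKLogitsWarper` followed by `TemperatureLogitsWarper`, not the actual `transformers` implementation:

```python
import math

def warp_logits(logits, top_k=50, temperature=0.7):
    """Toy logits warper: keep only the top_k highest scores (everything
    else becomes -inf so it can never be sampled), then apply temperature."""
    threshold = sorted(logits, reverse=True)[min(top_k, len(logits)) - 1]
    masked = [x if x >= threshold else -math.inf for x in logits]
    return [x / temperature for x in masked]

logits = [2.0, 0.5, -1.0, 3.0]
warped = warp_logits(logits, top_k=2, temperature=1.0)
# only the two highest logits survive; the rest are -inf
```

The `-inf` masking is also why sampled `scores` returned by generation are mostly `-inf` when a `top_k` warper is active.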
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19474/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19473
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19473/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19473/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19473/events
|
https://github.com/huggingface/transformers/pull/19473
| 1,403,554,720
|
PR_kwDOCUB6oc5Ag6rv
| 19,473
|
Fix `XGLMModelLanguageGenerationTest.test_batched_nan_fp16`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger OK, let me check if I can do something for (not a real) weights defined by\r\n\r\n```python\r\nself.register_buffer(\"weights\", emb_weights)\r\n\r\n```"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
#18057 added this test to test running with fp16.
However, `from_pretrained(model_name, torch_dtype=torch.float16)` seems **not able to change the dtype** of the weights registered below:
https://github.com/huggingface/transformers/blob/a7bc4221c0c09857b30ac467e7de86d3f5a7c482/src/transformers/models/xglm/modeling_xglm.py#L168-L176
and `hidden_states` becomes `float32` again (because `position` is) at
https://github.com/huggingface/transformers/blob/a7bc4221c0c09857b30ac467e7de86d3f5a7c482/src/transformers/models/xglm/modeling_xglm.py#L715
and finally fails at `hidden_states = self.self_attn_layer_norm(hidden_states)` with
```bash
RuntimeError: expected scalar type Float but found Half
```
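As a torch-free illustration of the failure mode (the `Tensor` and `add` stand-ins below are hypothetical, not PyTorch APIs): `torch_dtype=torch.float16` converts the parameters, but a buffer left in `float32` promotes whatever it is added to back to `float32`, which then clashes with the `float16` layer norm:

```python
from dataclasses import dataclass

@dataclass
class Tensor:
    """Hypothetical stand-in that only tracks a dtype label."""
    dtype: str

def add(a, b):
    # mimics PyTorch type promotion: float16 + float32 -> float32
    dtype = "float32" if "float32" in (a.dtype, b.dtype) else "float16"
    return Tensor(dtype)

hidden_states = Tensor("float16")  # converted by torch_dtype=torch.float16
positions = Tensor("float32")      # registered buffer, NOT converted
hidden_states = add(hidden_states, positions)
# hidden_states is float32 again, while the layer norm weights stayed
# float16, mirroring "expected scalar type Float but found Half"
```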
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19473/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19473",
"html_url": "https://github.com/huggingface/transformers/pull/19473",
"diff_url": "https://github.com/huggingface/transformers/pull/19473.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19473.patch",
"merged_at": 1665497140000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19472
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19472/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19472/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19472/events
|
https://github.com/huggingface/transformers/pull/19472
| 1,403,537,970
|
PR_kwDOCUB6oc5Ag3GN
| 19,472
|
Update `WhisperModelIntegrationTests.test_large_batched_generation`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
Update the expected values for `WhisperModelIntegrationTests::test_large_batched_generation`.
The difference is probably due to the different GPUs used.
See currently failing test [here](https://github.com/huggingface/transformers/actions/runs/3212658214/jobs/5251735170).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19472/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19472",
"html_url": "https://github.com/huggingface/transformers/pull/19472",
"diff_url": "https://github.com/huggingface/transformers/pull/19472.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19472.patch",
"merged_at": 1665491965000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19471
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19471/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19471/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19471/events
|
https://github.com/huggingface/transformers/issues/19471
| 1,403,528,341
|
I_kwDOCUB6oc5TqCSV
| 19,471
|
Suppress warning when using `DataCollatorForSeq2Seq`
|
{
"login": "pietrolesci",
"id": 61748653,
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pietrolesci",
"html_url": "https://github.com/pietrolesci",
"followers_url": "https://api.github.com/users/pietrolesci/followers",
"following_url": "https://api.github.com/users/pietrolesci/following{/other_user}",
"gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions",
"organizations_url": "https://api.github.com/users/pietrolesci/orgs",
"repos_url": "https://api.github.com/users/pietrolesci/repos",
"events_url": "https://api.github.com/users/pietrolesci/events{/privacy}",
"received_events_url": "https://api.github.com/users/pietrolesci/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sgugger ",
"Could you post a clear reproducer of the issue? Thanks a lot!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> Could you post a clear reproducer of the issue? Thanks a lot!\r\n\r\nWhen fine tuning a DialoGPT-medium model with a custom Dataset class as so:\r\n\r\n```\r\nclass ConversationDataset(Dataset):\r\n def __init__(self, tokenizer, file_path, block_size):\r\n self.tokenizer = tokenizer\r\n self.block_size = block_size\r\n self.inputs = []\r\n self.responses = []\r\n\r\n with open(file_path, 'r', encoding=\"utf-8\") as file:\r\n lines = file.readlines()\r\n for i in range(0, len(lines), 2):\r\n if i + 1 < len(lines):\r\n self.inputs.append(lines[i].strip().replace(\"input: \", \"\"))\r\n self.responses.append(lines[i + 1].strip().replace(\"response: \", \"\"))\r\n\r\n # Tokenize and pad inputs and responses in a single step\r\n self.input_tensors = tokenizer(self.inputs, return_tensors='pt', padding=True, truncation=True, max_length=self.block_size)\r\n self.response_tensors = tokenizer(self.responses, return_tensors='pt', padding=True, truncation=True, max_length=self.block_size)\r\n```",
"This is being fixed by #23742"
] | 1,665
| 1,685
| 1,668
|
CONTRIBUTOR
| null |
### Feature request
When using a fast tokenizer with `DataCollatorForSeq2Seq`, the following warning is currently printed:
```
You're using a T5TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
```
As a temporary solution, what is the suggested way to suppress it?
Is there any plan to update `DataCollatorForSeq2Seq`?
Thanks a lot in advance for your help!
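As a stop-gap, the warning can be dropped with a standard `logging` filter keyed on the message text. The mechanism is plain stdlib logging; the exact logger name `transformers` emits the warning through is an assumption here, so attach the filter to the logger (or handler) that actually produces it:

```python
import logging

class DropPadWarning(logging.Filter):
    """Drop log records containing the fast-tokenizer `pad` warning."""
    def filter(self, record):
        return "using the `__call__` method is faster" not in record.getMessage()

# Assumed emitter: the tokenizer base module's logger.
logging.getLogger("transformers.tokenization_utils_base").addFilter(DropPadWarning())
```

Note that logger-level filters only apply to records logged directly through that logger, not to records propagated from child loggers, so attaching to a handler is the more robust option.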
### Motivation
N/A
### Your contribution
N/A
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19471/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19470
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19470/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19470/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19470/events
|
https://github.com/huggingface/transformers/pull/19470
| 1,403,489,913
|
PR_kwDOCUB6oc5AgsxD
| 19,470
|
CLI: add import protection to datasets
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
MEMBER
| null |
# What does this PR do?
Add import protection to `datasets` -- I've seen two or three recent issues where the authors can't post their environment because `datasets` fails to import in the `pt-to-tf` CLI ([example](https://github.com/huggingface/transformers/issues/19445))
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19470/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19470/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19470",
"html_url": "https://github.com/huggingface/transformers/pull/19470",
"diff_url": "https://github.com/huggingface/transformers/pull/19470.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19470.patch",
"merged_at": 1665490772000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19469
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19469/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19469/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19469/events
|
https://github.com/huggingface/transformers/pull/19469
| 1,403,462,502
|
PR_kwDOCUB6oc5Agm4n
| 19,469
|
Fix `FlaubertTokenizer.__init__`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
There was a tiny error in #19330: a missing comma before `**kwargs` turns the argument into an exponentiation:
```python
do_lowercase=do_lowercase**kwargs,
```
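To see why this is a runtime bug rather than a syntax error: without the comma, `**` is parsed as the power operator instead of keyword-argument unpacking, so the call raises `TypeError` when executed (toy reproduction with a stand-in `init`):

```python
def init(**kwargs):
    return kwargs

do_lowercase = False
kwargs = {"pad_token": "<pad>"}

# correct: the comma makes `**kwargs` keyword unpacking
ok = init(do_lowercase=do_lowercase, **kwargs)

# buggy: `do_lowercase**kwargs` is `bool ** dict`, a TypeError at call time
try:
    init(do_lowercase=do_lowercase**kwargs)
except TypeError:
    pass
```

Restoring the comma (`do_lowercase=do_lowercase, **kwargs`) fixes the call.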
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19469/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19469",
"html_url": "https://github.com/huggingface/transformers/pull/19469",
"diff_url": "https://github.com/huggingface/transformers/pull/19469.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19469.patch",
"merged_at": 1665426924000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19468
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19468/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19468/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19468/events
|
https://github.com/huggingface/transformers/pull/19468
| 1,403,438,624
|
PR_kwDOCUB6oc5AghvK
| 19,468
|
Add warning in `generate` & `device_map=auto` & half precision models
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for the feedback @Narsil ! \r\n\r\nRegarding `accelerate` I am unsure about the magic that happens there and if it is fixable without breaking anything. If there is something I can fix from `accelerate` happy to open a PR there! Gently pinging @sgugger and @muellerzr to see how we can fix that from `accelerate`\r\n\r\nI think that `next(self.parameters())` should do the trick too, regarding the ordering in pytorch, I have observed a similar phenomenon in https://github.com/huggingface/transformers/pull/18312 / when printing a module the output is sensitive to the order of each submodule. I think that `module.parameters()` uses the same logic as `print` -> it uses the `self._modules` attribute from [`nn.Module`](https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module).\r\nSo I believe that if a model has been defined correctly (ie, `Embedding` layer at the beginning of the module) `next(self.parameters())` should return the parameter of the `Embedding` layer. This is an example script I used to confirm my intuition but small details that I might be missing can change my hypothesis!\r\n```\r\n>>> model = nn.Sequential(\r\n... nn.Embedding(1, 2),\r\n... nn.Linear(2, 3),\r\n... nn.Linear(3, 4),\r\n... )\r\n>>> list(model.parameters())[0]\r\nParameter containing:\r\ntensor([[0.1374, 1.0764]], requires_grad=True)\r\n```\r\nBut maybe there is a better way to do so! I'll check what is done on `accelerate`.\r\n\r\nAlso in the worst case we can just leave the warning message only, and force the user to pass an `input_ids` that is on the same device as the model!",
"_The documentation is not available anymore as the PR was closed or merged._",
"This should be neither in Accelerate nor here in my opinion. There is a legitimate use case where you want to:\r\n- make the forward passes on your GPU\r\n- have the generation stuff happen on CPU\r\nbecause you lack GPU memory (for instance). This is why Accelerate does not force the inputs to be on the same device as the model.\r\n\r\nWe can add a warning on the Transformer side, but we shouldn't force the inputs to change device. Just tell the user in case they did a mistake and that generation might be slow.",
"Perfect thanks! \r\nFine with this change! I have kept the warning message and reverted the force assignment in aaf6ecb4d2f8872204f7ad80b38f21ac04b7b267 Let me know what do you think ;) !\r\n\r\nWe should also question ourseleves whether we put the warning inside `sample` or more generally in `generate` before calling `sample` ",
"Perfect thanks! Adapted the changes in 2a89fcd50e18d0da3bec56e878378f513bbed0c1 Will merge once it's green ;) \r\nThanks a lot @Narsil & @sgugger !",
"I am just reading this discussion for learning purposes, as you all know better than me on this stuff. But @younesbelkada, you mentioned\r\n\r\n> When instantiating a model using device_map=auto and for half-precision models, the model returns the logits on the same device as the input.\r\n\r\nI feel a bit confused: for models running in float32, **does the model return the logits on the same device as the input too**?\r\nI know for float32, we don't have issue on CPU. But I am just wondering the relationship between `dtype` and `the same device`.",
"Thanks for bringing up my comment @ydshieh !\r\nI think my comment is slightly misleading here, it indeed returns the logits on the same device as the `input_ids` even if the model is in float32. The thing is, some pytorch operations such as `topk` are supported under CPU for float32 but are not supported for float16. So there is no relationship between `dtype` and `the same device` - if you load a model using `device_map=auto` the forward pass of the model will return the output on the same device as the input! \r\nWill update my comment for clarification! Let me know if anything else is unclear ",
"No problem @younesbelkada. I was probably focusing on the words too much - bad (sometimes) habits :-)",
"Ahah don't worry at all! πͺ Agree that my previous statement was super confusing"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR mainly fixes two issues:
- https://github.com/TimDettmers/bitsandbytes/issues/42
- https://github.com/huggingface/transformers/issues/19445
This issue seems to be unrelated to `bitsandbytes` 8-bit models and is slightly trickier than that. When instantiating a model using `device_map=auto` (EDIT: regardless of the `dtype`), the model returns the logits on the same device as the input. I think that this is expected, since this is how `accelerate` builds its hooks for each module. So I don't expect the fix to be done on the `accelerate` side, but rather on the `transformers` side.
Therefore, if a user calls `generate` or just a simple forward pass with a half-precision model that has been instantiated with `device_map=auto`, and if the input is initially on the CPU, they may encounter unexpected behaviours such as `top-k-cpu not implemented for Half`, since the sampling operations (`top_k`, `top_p`, etc.) are done on the same device as the logits, which in this specific case are on the CPU.
This PR addresses the issue by forcing the logits to be on the same device as the **first** module of the model (in case the model is sharded across multiple devices, it makes sense for the `input_ids` to be on the `device` of the first module). The PR also adds a warning message, suggesting that the user explicitly put the `input_ids` on the same device as the model.
The PR also adds a slow `bnb` test to make sure this situation will not happen in the future!
# How to reproduce the issue?
With this simple snippet you can reproduce the issue:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
MAX_NEW_TOKENS = 128
model_name = 'gpt2'
text = """
Q: On average Joe throws 25 punches per minute. A fight lasts 5 rounds of 3 minutes.
How many punches did he throw?\n
A: Let's think step by step.\n"""
tokenizer = AutoTokenizer.from_pretrained(model_name)
input_ids = tokenizer(text, return_tensors="pt").input_ids
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map='auto',
torch_dtype=torch.float16
)
generated_ids = model.generate(input_ids, max_length=len(input_ids[0])+25, do_sample=True, top_p=0.7)
print(tokenizer.decode(generated_ids[0]))
```
Thanks!
cc @sgugger @ydshieh @Narsil
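The `next(model.parameters())` reasoning from the discussion can be illustrated with a torch-free toy (the `Module` class below is a hypothetical stand-in, not `torch.nn.Module`): submodules are stored in insertion order, so the first yielded parameter belongs to the first registered submodule, typically the embedding layer whose device the `input_ids` should match:

```python
class Module:
    """Minimal stand-in: ordered submodules + own parameters, like nn.Module."""
    def __init__(self, params=()):
        self._modules = {}          # dicts preserve insertion order
        self._params = list(params)

    def add_module(self, name, sub):
        self._modules[name] = sub

    def parameters(self):
        yield from self._params
        for sub in self._modules.values():
            yield from sub.parameters()

model = Module()
model.add_module("embed", Module(params=["embed.weight on cuda:0"]))
model.add_module("lm_head", Module(params=["lm_head.weight on cuda:1"]))

first_param = next(model.parameters())
# belongs to the first registered submodule ("embed"), i.e. the device
# the inputs should be moved to
```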
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19468/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19468",
"html_url": "https://github.com/huggingface/transformers/pull/19468",
"diff_url": "https://github.com/huggingface/transformers/pull/19468.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19468.patch",
"merged_at": 1665500329000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19467
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19467/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19467/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19467/events
|
https://github.com/huggingface/transformers/issues/19467
| 1,403,416,486
|
I_kwDOCUB6oc5Tpm-m
| 19,467
|
Redundant normalisation of image and text features in OWL-ViT
|
{
"login": "ekazakos",
"id": 20310086,
"node_id": "MDQ6VXNlcjIwMzEwMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/20310086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekazakos",
"html_url": "https://github.com/ekazakos",
"followers_url": "https://api.github.com/users/ekazakos/followers",
"following_url": "https://api.github.com/users/ekazakos/following{/other_user}",
"gists_url": "https://api.github.com/users/ekazakos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ekazakos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekazakos/subscriptions",
"organizations_url": "https://api.github.com/users/ekazakos/orgs",
"repos_url": "https://api.github.com/users/ekazakos/repos",
"events_url": "https://api.github.com/users/ekazakos/events{/privacy}",
"received_events_url": "https://api.github.com/users/ekazakos/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @ekazakos, you're right but the image embeddings are not normalized twice. `OwlViTModel.forward()` is called within `OwlViTForObjectDetectiob.image_text_embedder()` with `return_base_image_embeds=True`. This assures that we can retrieve both the umodified CLIP (OwlViTModel) embeddings and logits using the normalized features and also the unnormalized image embeddings (lines 1085-1087).\r\n\r\nThe reason we do this is OwlViT is trained in two stages: (1) training CLIP / OwlViTModel without any modifications and (2) training the object detection head and fine-tuning the base CLIP model.\r\n\r\nHope this helps!",
"Hi @alaradirik,\r\n\r\nI suspected that this was the reason you did that. And thanks for the clarification, it's helpful! Yet, that implies that the text features are indeed normalised twice, no?",
"> Hi @alaradirik,\r\n> \r\n> I suspected that this was the reason you did that. And thanks for the clarification, it's helpful! Yet, that implies that the text features are indeed normalised twice, no?\r\n\r\nYes, you're right indeed and thanks for pointing it out! I'll be opening a fix PR shortly.",
"Glad I could help! Could you please let me know whether this boosts validation performance at all?\r\n",
"\r\nHey @ekazakos, sorry for the delay! The issue will be fixed with this [PR](https://github.com/huggingface/transformers/pull/19712) but it doesn't affect the performance as double normalization yields the same results.\r\n"
] | 1,665
| 1,666
| 1,666
|
NONE
| null |
### Who can help?
@alaradirik
### Issue description
Hi,
Thank you for the codebase! As the title suggests, I think that in `modeling_owlvit.py` the image and text features are normalised twice, while in the original codebase from Google Research they are normalised only once. In particular, in `modeling_owlvit.py` image and text features are normalised both in lines 1073-1074 and in lines 1145-1146. On the contrary, in the original code, in [https://github.com/google-research/scenic/blob/main/scenic/projects/owl_vit/layers.py](https://github.com/google-research/scenic/blob/main/scenic/projects/owl_vit/layers.py), the features are normalised only in lines 86-89, whereas in line 144 the normalisation parameter is set as `normalize=False` and there is a comment explicitly saying `Don't normalize image and text embeddings:`.
I think this is sensible, as there is no reason for double normalisation, which normally leads to performance degradation. Please let me know what you think, and whether I'm wrong, as I might be missing something.
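The resolution in the comments above ("double normalization yields the same results") can be sketched quickly: L2 normalization is idempotent, so applying it a second time only touches floating-point noise. This is an illustrative NumPy snippet, not the `transformers` code itself:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Divide each row by its L2 norm, mirroring the normalization
    # applied to the OwlViT image/text embeddings.
    norms = np.linalg.norm(x, axis=axis, keepdims=True)
    return x / np.maximum(norms, eps)

embeds = np.random.RandomState(0).randn(4, 8)
once = l2_normalize(embeds)
twice = l2_normalize(once)
# Unit-norm vectors are unchanged by a second normalization.
print(np.allclose(once, twice))  # True
```

This is why the double normalization hurt nothing at inference time, even though removing it (as in the fix PR) is cleaner.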
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19467/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19466
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19466/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19466/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19466/events
|
https://github.com/huggingface/transformers/pull/19466
| 1,403,391,217
|
PR_kwDOCUB6oc5AgXnE
| 19,466
|
Fix doctests for `DeiT` and `TFGroupViT`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
For `DeiT`: The parameter initialization is changed for some models in #19341. This can affect some tests where a checkpoint without a head is used (so the head is randomly initialized). cc @alaradirik
For `TFGroupViT`: We loved PyTorch too much (or not enough?) and want to use it in TensorFlow --> It's still too early :-)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19466/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19466",
"html_url": "https://github.com/huggingface/transformers/pull/19466",
"diff_url": "https://github.com/huggingface/transformers/pull/19466.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19466.patch",
"merged_at": 1665491443000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19465
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19465/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19465/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19465/events
|
https://github.com/huggingface/transformers/pull/19465
| 1,403,354,484
|
PR_kwDOCUB6oc5AgPzx
| 19,465
|
Update PT to TF CLI for audio models
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> This was causing the conversion script to throw exceptions with Whisper, correct?\r\n\r\n@gante Exactly. I modified it locally to push the weights to the hub. This PR is a tidier version of the changes I made i.e. not breaking it for other models. "
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
Fixes small issues which prevented converting checkpoints for the whisper model with the pt-to-tf CLI.
* `"raw_speech"` -> `"audio"`: updates the name of the inputs fed to the processor. This reflects the deprecation of `raw_speech` in [the audio processors](https://github.com/huggingface/transformers/blob/331ea019d7053924ee4d9d4d30282a2c74c272a6/src/transformers/models/wav2vec2/processing_wav2vec2.py#L79)
* Takes the feature extractor's default padding strategy if it's not False, otherwise sets it to True. This was needed because Whisper models must be padded to the max sequence length (not to the longest sequence in the batch), whereas other speech models' feature extractors can run with `padding=True` but don't have a max length set by default, so they will fail with `padding="max_length"`, e.g. `"facebook/s2t-small-librispeech-asr"`.
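The padding selection described in the second bullet can be sketched as a small helper; this is a hypothetical simplification of the CLI change, not the actual code:

```python
def resolve_padding(feature_extractor_padding):
    # Hypothetical sketch of the rule above: keep the feature extractor's
    # own default padding strategy when it defines one (e.g. Whisper uses
    # "max_length"); otherwise pad to the longest sequence in the batch.
    if feature_extractor_padding is not False:
        return feature_extractor_padding
    return True

print(resolve_padding("max_length"))  # Whisper-style extractor
print(resolve_padding(False))         # e.g. s2t-style extractor -> True
```

This keeps models that require fixed-length padding working without breaking extractors that have no max length configured.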
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19465/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19465",
"html_url": "https://github.com/huggingface/transformers/pull/19465",
"diff_url": "https://github.com/huggingface/transformers/pull/19465.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19465.patch",
"merged_at": 1665509129000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19464
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19464/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19464/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19464/events
|
https://github.com/huggingface/transformers/pull/19464
| 1,403,313,992
|
PR_kwDOCUB6oc5AgHOk
| 19,464
|
Update Marian config default vocabulary size
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks like a bugfix to me if the model cannot be instantiated. Looks good to me."
] | 1,665
| 1,665
| 1,665
|
MEMBER
| null |
# What does this PR do?
Fixes #19296
Looking at existing Marian models, their vocabulary size is set to pad token id + 1 ([example](https://huggingface.co/Helsinki-NLP/opus-mt-de-en/blob/main/config.json#L59)). This PR modifies the default vocabulary size such that a) it doesn't throw exceptions (must be > pad token id) and b) preserves this property.
Alternatively, the default for `decoder_start_token_id` and `pad_token_id` can be reduced π
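The invariant described above (default vocabulary size = pad token id + 1, hence strictly greater than the pad token id) can be sketched as follows; the helper name is hypothetical, not the actual `transformers` code:

```python
def marian_default_vocab_size(pad_token_id: int) -> int:
    # Hypothetical helper: pick the smallest vocabulary size that keeps
    # the pad token id a valid embedding index.
    return pad_token_id + 1

pad_token_id = 58100  # default used by existing Marian checkpoints
vocab_size = marian_default_vocab_size(pad_token_id)
assert vocab_size > pad_token_id  # must hold, or instantiation fails
print(vocab_size)  # 58101
```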
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19464/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19464",
"html_url": "https://github.com/huggingface/transformers/pull/19464",
"diff_url": "https://github.com/huggingface/transformers/pull/19464.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19464.patch",
"merged_at": 1665587712000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19463
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19463/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19463/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19463/events
|
https://github.com/huggingface/transformers/pull/19463
| 1,403,310,378
|
PR_kwDOCUB6oc5AgGbw
| 19,463
|
[WIP] Add MANTa-LM
|
{
"login": "NathanGodey",
"id": 38216711,
"node_id": "MDQ6VXNlcjM4MjE2NzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/38216711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NathanGodey",
"html_url": "https://github.com/NathanGodey",
"followers_url": "https://api.github.com/users/NathanGodey/followers",
"following_url": "https://api.github.com/users/NathanGodey/following{/other_user}",
"gists_url": "https://api.github.com/users/NathanGodey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NathanGodey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NathanGodey/subscriptions",
"organizations_url": "https://api.github.com/users/NathanGodey/orgs",
"repos_url": "https://api.github.com/users/NathanGodey/repos",
"events_url": "https://api.github.com/users/NathanGodey/events{/privacy}",
"received_events_url": "https://api.github.com/users/NathanGodey/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"We're looking forward to it, @NathanGodey!\r\n\r\nI'm pinging @ArthurZucker and @sgugger to make sure it's on their radar even if the implementation isn't ready to review. Let us know when you'd like for us to jump in and help!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,672
| 1,672
|
NONE
| null |
# Implement the MANTa-LM model (upcoming paper)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten, @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19463/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19463/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19463",
"html_url": "https://github.com/huggingface/transformers/pull/19463",
"diff_url": "https://github.com/huggingface/transformers/pull/19463.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19463.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19462
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19462/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19462/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19462/events
|
https://github.com/huggingface/transformers/pull/19462
| 1,403,266,255
|
PR_kwDOCUB6oc5Af9LB
| 19,462
|
missing double slash in link
|
{
"login": "MikailINTech",
"id": 45072645,
"node_id": "MDQ6VXNlcjQ1MDcyNjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/45072645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MikailINTech",
"html_url": "https://github.com/MikailINTech",
"followers_url": "https://api.github.com/users/MikailINTech/followers",
"following_url": "https://api.github.com/users/MikailINTech/following{/other_user}",
"gists_url": "https://api.github.com/users/MikailINTech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MikailINTech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MikailINTech/subscriptions",
"organizations_url": "https://api.github.com/users/MikailINTech/orgs",
"repos_url": "https://api.github.com/users/MikailINTech/repos",
"events_url": "https://api.github.com/users/MikailINTech/events{/privacy}",
"received_events_url": "https://api.github.com/users/MikailINTech/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19462). All of your documentation changes will be reflected on that endpoint."
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Correcting the missing double slash in the community's notebook link
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19462/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19462",
"html_url": "https://github.com/huggingface/transformers/pull/19462",
"diff_url": "https://github.com/huggingface/transformers/pull/19462.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19462.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19461
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19461/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19461/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19461/events
|
https://github.com/huggingface/transformers/pull/19461
| 1,403,260,764
|
PR_kwDOCUB6oc5Af8Bg
| 19,461
|
Fix `TFGroupViT` CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @Rocketknight1 \r\n\r\nThere is an test failure `FAILED tests/models/groupvit/test_modeling_tf_groupvit.py::TFGroupViTTextModelTest ::test_saved_model_creation_extended`, see the error below. It happens at\r\n\r\n```python\r\nmodel = tf.keras.models.load_model(saved_model_dir)\r\noutputs = model(class_inputs_dict)\r\n```\r\n\r\nIt seems we provide arguments in `tf.int32`, but the loaded model expects `tf.int64`. I think you know much better this part.\r\nCould you take a look - maybe open another PR if necessary? Thank you.\r\n\r\n**Full Error**\r\n```\r\n2022-10-09T08:01:03.3261432Z __________ TFGroupViTTextModelTest.test_saved_model_creation_extended __________\r\n2022-10-09T08:01:03.3261667Z \r\n2022-10-09T08:01:03.3261940Z self = <tests.models.groupvit.test_modeling_tf_groupvit.TFGroupViTTextModelTest testMethod=test_saved_model_creation_extended>\r\n2022-10-09T08:01:03.3262235Z \r\n2022-10-09T08:01:03.3262337Z @slow\r\n2022-10-09T08:01:03.3262575Z def test_saved_model_creation_extended(self):\r\n2022-10-09T08:01:03.3262979Z config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n2022-10-09T08:01:03.3263352Z config.output_hidden_states = True\r\n2022-10-09T08:01:03.3263664Z config.output_attentions = True\r\n2022-10-09T08:01:03.3263941Z \r\n2022-10-09T08:01:03.3264165Z if hasattr(config, \"use_cache\"):\r\n2022-10-09T08:01:03.3264499Z config.use_cache = True\r\n2022-10-09T08:01:03.3264758Z \r\n2022-10-09T08:01:03.3264989Z for model_class in self.all_model_classes:\r\n2022-10-09T08:01:03.3265379Z class_inputs_dict = self._prepare_for_class(inputs_dict, model_class)\r\n2022-10-09T08:01:03.3392097Z model = model_class(config)\r\n2022-10-09T08:01:03.3392497Z num_out = len(model(class_inputs_dict))\r\n2022-10-09T08:01:03.3392721Z \r\n2022-10-09T08:01:03.3392989Z with tempfile.TemporaryDirectory() as tmpdirname:\r\n2022-10-09T08:01:03.3393340Z model.save_pretrained(tmpdirname, saved_model=True)\r\n2022-10-09T08:01:03.3393690Z 
saved_model_dir = os.path.join(tmpdirname, \"saved_model\", \"1\")\r\n2022-10-09T08:01:03.3394008Z model = tf.keras.models.load_model(saved_model_dir)\r\n2022-10-09T08:01:03.3394308Z > outputs = model(class_inputs_dict)\r\n2022-10-09T08:01:03.3394467Z \r\n2022-10-09T08:01:03.3394617Z tests/models/groupvit/test_modeling_tf_groupvit.py:480: \r\n2022-10-09T08:01:03.3394911Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n2022-10-09T08:01:03.3395433Z /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:70: in error_handler\r\n2022-10-09T08:01:03.3395756Z raise e.with_traceback(filtered_tb) from None\r\n2022-10-09T08:01:03.3396038Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n2022-10-09T08:01:03.3396187Z \r\n2022-10-09T08:01:03.3396605Z args = ({'attention_mask': <tf.Tensor 'input_ids:0' shape=(12, 7) dtype=int32>, 'input_ids': <tf.Tensor 'input_ids_1:0' shape=(12, 7) dtype=int32>}, None, None, None, None, None, ...)\r\n2022-10-09T08:01:03.3396964Z kwargs = {}\r\n2022-10-09T08:01:03.3397499Z inputs = (({'attention_mask': <tf.Tensor 'input_ids:0' shape=(12, 7) dtype=int32>, 'input_ids': <tf.Tensor 'input_ids_1:0' shape=(12, 7) dtype=int32>}, None, None, None, None, None, ...), {})\r\n2022-10-09T08:01:03.3397859Z allow_conversion = True\r\n2022-10-09T08:01:03.3398483Z function_name = '__inference_tf_group_vi_t_text_model_36_layer_call_and_return_conditional_losses_370446'\r\n2022-10-09T08:01:03.3399135Z function = <ConcreteFunction tf_group_vi_t_text_model_36_layer_call_and_return_conditional_losses(input_ids, attention_mask=None, position_ids=None, output_attentions=None, output_hidden_states=None, return_dict=None, training=True) at 0x7F8CC81D4D90>\r\n2022-10-09T08:01:03.3400102Z signature_descriptions = [\"Option 1:\\n Positional arguments (7 total):\\n * {'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int64...put_ids/input_ids')}\\n * None\\n * None\\n * None\\n * 
None\\n * None\\n * True\\n Keyword arguments: {}\"]\r\n2022-10-09T08:01:03.3400797Z _pretty_format_positional = <function recreate_function.<locals>.restored_function_body.<locals>._pretty_format_positional at 0x7f8d003cb700>\r\n2022-10-09T08:01:03.3401116Z index = 3\r\n2022-10-09T08:01:03.3401601Z concrete_function = <ConcreteFunction tf_group_vi_t_text_model_36_layer_call_and_return_conditional_losses(input_ids, attention_mask=None, position_ids=None, output_attentions=None, output_hidden_states=None, return_dict=None, training=True) at 0x7F8CC81D4D90>\r\n2022-10-09T08:01:03.3402494Z positional = ({'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int64, name='attention_mask'), 'input_ids': TensorSpec(shape=(None, None), dtype=tf.int64, name='input_ids/input_ids')}, None, None, None, None, None, ...)\r\n2022-10-09T08:01:03.3402916Z keyword = {}\r\n2022-10-09T08:01:03.3403035Z \r\n2022-10-09T08:01:03.3403161Z def restored_function_body(*args, **kwargs):\r\n2022-10-09T08:01:03.3403636Z \"\"\"Calls a restored function or raises an error if no matching function.\"\"\"\r\n2022-10-09T08:01:03.3403949Z if not saved_function.concrete_functions:\r\n2022-10-09T08:01:03.3404275Z raise ValueError(\"Found zero restored functions for caller function.\")\r\n2022-10-09T08:01:03.3404642Z # This is the format of function.graph.structured_input_signature. At this\r\n2022-10-09T08:01:03.3405012Z # point, the args and kwargs have already been canonicalized.\r\n2022-10-09T08:01:03.3405274Z inputs = (args, kwargs)\r\n2022-10-09T08:01:03.3405478Z \r\n2022-10-09T08:01:03.3405739Z # First try to find a concrete function that can be called without input\r\n2022-10-09T08:01:03.3406101Z # conversions. 
This allows one to pick a more specific trace in case there\r\n2022-10-09T08:01:03.3406417Z # was also a more expensive one that supported tensors.\r\n2022-10-09T08:01:03.3406708Z for allow_conversion in [False, True]:\r\n2022-10-09T08:01:03.3407008Z for function_name in saved_function.concrete_functions:\r\n2022-10-09T08:01:03.3407321Z function = concrete_functions[function_name]\r\n2022-10-09T08:01:03.3407622Z if any([inp is None for inp in function.captured_inputs]):\r\n2022-10-09T08:01:03.3407955Z raise ValueError(\"Looks like you are trying to run a loaded \"\r\n2022-10-09T08:01:03.3408359Z \"non-Keras model that was trained using \"\r\n2022-10-09T08:01:03.3408774Z \"tf.distribute.experimental.ParameterServerStrategy \"\r\n2022-10-09T08:01:03.3409160Z \"with variable partitioning, which is not currently \"\r\n2022-10-09T08:01:03.3409664Z \"supported. Try using Keras to define your model \"\r\n2022-10-09T08:01:03.3409965Z \"if possible.\")\r\n2022-10-09T08:01:03.3410294Z if _concrete_function_callable_with(function, inputs, allow_conversion):\r\n2022-10-09T08:01:03.3410757Z return _call_concrete_function(function, inputs)\r\n2022-10-09T08:01:03.3410980Z \r\n2022-10-09T08:01:03.3411193Z signature_descriptions = []\r\n2022-10-09T08:01:03.3411414Z \r\n2022-10-09T08:01:03.3411651Z def _pretty_format_positional(positional):\r\n2022-10-09T08:01:03.3411939Z return \"Positional arguments ({} total):\\n * {}\".format(\r\n2022-10-09T08:01:03.3412206Z len(positional),\r\n2022-10-09T08:01:03.3412486Z \"\\n * \".join(pprint.pformat(a) for a in positional))\r\n2022-10-09T08:01:03.3412735Z \r\n2022-10-09T08:01:03.3413063Z for index, function_name in enumerate(saved_function.concrete_functions):\r\n2022-10-09T08:01:03.3413387Z concrete_function = concrete_functions[function_name]\r\n2022-10-09T08:01:03.3413721Z positional, keyword = concrete_function.structured_input_signature\r\n2022-10-09T08:01:03.3414020Z signature_descriptions.append(\r\n2022-10-09T08:01:03.3414290Z 
\"Option {}:\\n {}\\n Keyword arguments: {}\".format(\r\n2022-10-09T08:01:03.3414622Z index + 1, _pretty_format_positional(positional), keyword))\r\n2022-10-09T08:01:03.3414929Z > raise ValueError(\r\n2022-10-09T08:01:03.3415383Z \"Could not find matching concrete function to call loaded from the \"\r\n2022-10-09T08:01:03.3415807Z f\"SavedModel. Got:\\n {_pretty_format_positional(args)}\\n Keyword \"\r\n2022-10-09T08:01:03.3416205Z f\"arguments: {kwargs}\\n\\n Expected these arguments to match one of the \"\r\n2022-10-09T08:01:03.3416594Z f\"following {len(saved_function.concrete_functions)} option(s):\\n\\n\"\r\n2022-10-09T08:01:03.3416955Z f\"{(chr(10)+chr(10)).join(signature_descriptions)}\")\r\n2022-10-09T08:01:03.3417393Z E ValueError: Exception encountered when calling layer \"tf_group_vi_t_text_model_36\" \" f\"(type TFGroupViTTextModel).\r\n2022-10-09T08:01:03.3417731Z E \r\n2022-10-09T08:01:03.3418041Z E Could not find matching concrete function to call loaded from the SavedModel. 
Got:\r\n2022-10-09T08:01:03.3418385Z E Positional arguments (7 total):\r\n2022-10-09T08:01:03.3418885Z E * {'attention_mask': <tf.Tensor 'input_ids:0' shape=(12, 7) dtype=int32>,\r\n2022-10-09T08:01:03.3419334Z E 'input_ids': <tf.Tensor 'input_ids_1:0' shape=(12, 7) dtype=int32>}\r\n2022-10-09T08:01:03.3419619Z E * None\r\n2022-10-09T08:01:03.3419837Z E * None\r\n2022-10-09T08:01:03.3420048Z E * None\r\n2022-10-09T08:01:03.3420245Z E * None\r\n2022-10-09T08:01:03.3420455Z E * None\r\n2022-10-09T08:01:03.3420668Z E * False\r\n2022-10-09T08:01:03.3420890Z E Keyword arguments: {}\r\n2022-10-09T08:01:03.3421125Z E \r\n2022-10-09T08:01:03.3421420Z E Expected these arguments to match one of the following 4 option(s):\r\n2022-10-09T08:01:03.3421707Z E \r\n2022-10-09T08:01:03.3421894Z E Option 1:\r\n2022-10-09T08:01:03.3422147Z E Positional arguments (7 total):\r\n2022-10-09T08:01:03.3422648Z E * {'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int64, name='input_ids/attention_mask'),\r\n2022-10-09T08:01:03.3423198Z E 'input_ids': TensorSpec(shape=(None, None), dtype=tf.int64, name='input_ids/input_ids')}\r\n2022-10-09T08:01:03.3423494Z E * None\r\n2022-10-09T08:01:03.3423709Z E * None\r\n2022-10-09T08:01:03.3423922Z E * None\r\n2022-10-09T08:01:03.3424136Z E * None\r\n2022-10-09T08:01:03.3424405Z E * None\r\n2022-10-09T08:01:03.3424613Z E * False\r\n2022-10-09T08:01:03.3424850Z E Keyword arguments: {}\r\n2022-10-09T08:01:03.3425065Z E \r\n2022-10-09T08:01:03.3425242Z E Option 2:\r\n2022-10-09T08:01:03.3425478Z E Positional arguments (7 total):\r\n2022-10-09T08:01:03.3425975Z E * {'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int64, name='input_ids/attention_mask'),\r\n2022-10-09T08:01:03.3426635Z E 'input_ids': TensorSpec(shape=(None, None), dtype=tf.int64, name='input_ids/input_ids')}\r\n2022-10-09T08:01:03.3426902Z E * None\r\n2022-10-09T08:01:03.3427098Z E * None\r\n2022-10-09T08:01:03.3427397Z E * None\r\n2022-10-09T08:01:03.3427576Z E * 
None\r\n2022-10-09T08:01:03.3427742Z E * None\r\n2022-10-09T08:01:03.3427922Z E * True\r\n2022-10-09T08:01:03.3428119Z E Keyword arguments: {}\r\n2022-10-09T08:01:03.3428303Z E \r\n2022-10-09T08:01:03.3428476Z E Option 3:\r\n2022-10-09T08:01:03.3428729Z E Positional arguments (7 total):\r\n2022-10-09T08:01:03.3429129Z E * {'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int64, name='attention_mask'),\r\n2022-10-09T08:01:03.3429670Z E 'input_ids': TensorSpec(shape=(None, None), dtype=tf.int64, name='input_ids/input_ids')}\r\n2022-10-09T08:01:03.3429926Z E * None\r\n2022-10-09T08:01:03.3430109Z E * None\r\n2022-10-09T08:01:03.3430288Z E * None\r\n2022-10-09T08:01:03.3430450Z E * None\r\n2022-10-09T08:01:03.3430633Z E * None\r\n2022-10-09T08:01:03.3430814Z E * False\r\n2022-10-09T08:01:03.3431017Z E Keyword arguments: {}\r\n2022-10-09T08:01:03.3431193Z E \r\n2022-10-09T08:01:03.3431371Z E Option 4:\r\n2022-10-09T08:01:03.3431585Z E Positional arguments (7 total):\r\n2022-10-09T08:01:03.3431991Z E * {'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int64, name='attention_mask'),\r\n2022-10-09T08:01:03.3432620Z E 'input_ids': TensorSpec(shape=(None, None), dtype=tf.int64, name='input_ids/input_ids')}\r\n2022-10-09T08:01:03.3432891Z E * None\r\n2022-10-09T08:01:03.3433073Z E * None\r\n2022-10-09T08:01:03.3433259Z E * None\r\n2022-10-09T08:01:03.3433417Z E * None\r\n2022-10-09T08:01:03.3433589Z E * None\r\n2022-10-09T08:01:03.3433769Z E * True\r\n2022-10-09T08:01:03.3433955Z E Keyword arguments: {}\r\n2022-10-09T08:01:03.3434128Z E \r\n2022-10-09T08:01:03.3434408Z E Call arguments received by layer \"tf_group_vi_t_text_model_36\" \" f\"(type TFGroupViTTextModel):\r\n2022-10-09T08:01:03.3434960Z E β’ args=({'input_ids': 'tf.Tensor(shape=(12, 7), dtype=int32)', 'attention_mask': 'tf.Tensor(shape=(12, 7), dtype=int32)'},)\r\n2022-10-09T08:01:03.3435308Z E β’ kwargs={'training': 'False'}\r\n```"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
Fix 3 `TFGroupViT` CI failures.
There is a remaining one `FAILED tests/models/groupvit/test_modeling_tf_groupvit.py::TFGroupViTTextModelTest ::test_saved_model_creation_extended` which I think @Rocketknight1 will know better.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19461/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19461",
"html_url": "https://github.com/huggingface/transformers/pull/19461",
"diff_url": "https://github.com/huggingface/transformers/pull/19461.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19461.patch",
"merged_at": 1665491356000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19460
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19460/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19460/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19460/events
|
https://github.com/huggingface/transformers/pull/19460
| 1,403,231,033
|
PR_kwDOCUB6oc5Af1uT
| 19,460
|
TF: TFBart embedding initialization
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
MEMBER
| null |
# What does this PR do?
### Context
We were initializing the embeddings as `TFSharedEmbeddings(config.vocab_size, config.d_model, config.pad_token_id, name="model.shared")`. Notice the 3rd argument, the pad token id, which is indeed the 3rd argument of `nn.Embedding`. However, for `TFSharedEmbeddings`, [the 3rd argument is the initializer range](https://github.com/huggingface/transformers/blob/4c962d5e790d06c142af35aad165c74c0bcf861a/src/transformers/modeling_tf_utils.py#L2837). This means that some models, like TFMarian, were initializing the embeddings with very [large values](https://github.com/huggingface/transformers/blob/4c962d5e790d06c142af35aad165c74c0bcf861a/src/transformers/models/marian/configuration_marian.py#L135) (stddev=58100).
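A minimal sketch of the mix-up (the functions below are simplified stand-ins mirroring only the positional-argument layout, not the real `nn.Embedding`/`TFSharedEmbeddings` classes):

```python
# Illustrative signatures only -- NOT the actual transformers/PyTorch classes.

def nn_embedding(num_embeddings, embedding_dim, padding_idx=None):
    # In nn.Embedding, the 3rd positional argument is the pad token id.
    return {"padding_idx": padding_idx}

def tf_shared_embeddings(vocab_size, hidden_size, initializer_range=None):
    # In TFSharedEmbeddings, the 3rd positional argument is the stddev
    # of the weight initializer instead.
    stddev = initializer_range if initializer_range is not None else hidden_size ** -0.5
    return {"stddev": stddev}

# Passing pad_token_id (58100 for Marian) positionally, as the old code did,
# silently becomes the initializer stddev on the TF side:
buggy = tf_shared_embeddings(58101, 512, 58100)
assert buggy["stddev"] == 58100
```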
### Changes
This PR correctly sets the embedding initialization according to the configuration parameter related to weight initialization range. It includes proper weight initialization when the embeddings are resized.
This PR will be used as a reference for embedding weight initialization, regarding the embedding update that is happening in the codebase at the moment.
### Discussion for the future
PT sets the weights in a top-down fashion, with `_init_weights` conveniently fetching information from the `config` and then initializing the weights. On TF, weight initialization is defined in a bottom-up fashion. This means that if we want to replicate the PT initialization for all TF weights, we need to pass the `config` all the way down to the individual layers (= verbose and needs manual changes in many places for all models) π
Alternatively, we can replicate the `_init_weights` logic in TF, and manually set the weights after initializing the model. IMO that would be much cleaner, despite not being the way Keras expects weights to be set.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19460/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19460",
"html_url": "https://github.com/huggingface/transformers/pull/19460",
"diff_url": "https://github.com/huggingface/transformers/pull/19460.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19460.patch",
"merged_at": 1665495886000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19459
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19459/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19459/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19459/events
|
https://github.com/huggingface/transformers/pull/19459
| 1,403,197,227
|
PR_kwDOCUB6oc5AfunH
| 19,459
|
Corrected non-working link.
|
{
"login": "YOGENDERSS",
"id": 81403270,
"node_id": "MDQ6VXNlcjgxNDAzMjcw",
"avatar_url": "https://avatars.githubusercontent.com/u/81403270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YOGENDERSS",
"html_url": "https://github.com/YOGENDERSS",
"followers_url": "https://api.github.com/users/YOGENDERSS/followers",
"following_url": "https://api.github.com/users/YOGENDERSS/following{/other_user}",
"gists_url": "https://api.github.com/users/YOGENDERSS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YOGENDERSS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YOGENDERSS/subscriptions",
"organizations_url": "https://api.github.com/users/YOGENDERSS/orgs",
"repos_url": "https://api.github.com/users/YOGENDERSS/repos",
"events_url": "https://api.github.com/users/YOGENDERSS/events{/privacy}",
"received_events_url": "https://api.github.com/users/YOGENDERSS/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger sir could you review this up. ",
"@julien-c if u could check this up then please check it once.\r\n",
"@amyeroberts mam if u r free then please u check. or anyone who is free please check.",
"Hi @YOGENDERSS , I think this PR is a duplicate from mine https://github.com/huggingface/transformers/pull/19434",
"_The documentation is not available anymore as the PR was closed or merged._",
"sorry i did'nt knew it was open already.",
"@MikailINTech thanks buddy",
"@YOGENDERSS No worries "
] | 1,665
| 1,665
| 1,665
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19459/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19459",
"html_url": "https://github.com/huggingface/transformers/pull/19459",
"diff_url": "https://github.com/huggingface/transformers/pull/19459.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19459.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19458
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19458/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19458/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19458/events
|
https://github.com/huggingface/transformers/pull/19458
| 1,403,168,171
|
PR_kwDOCUB6oc5AfoYF
| 19,458
|
fix warnings in deberta
|
{
"login": "sanderland",
"id": 48946947,
"node_id": "MDQ6VXNlcjQ4OTQ2OTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanderland",
"html_url": "https://github.com/sanderland",
"followers_url": "https://api.github.com/users/sanderland/followers",
"following_url": "https://api.github.com/users/sanderland/following{/other_user}",
"gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanderland/subscriptions",
"organizations_url": "https://api.github.com/users/sanderland/orgs",
"repos_url": "https://api.github.com/users/sanderland/repos",
"events_url": "https://api.github.com/users/sanderland/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanderland/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@LysandreJik any thoughts?"
] | 1,665
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
In recent torch versions, the following warning is thrown on the deberta code:
```
UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
p2c_att = torch.matmul(key_layer, torch.tensor(pos_query_layer.transpose(-1, -2), dtype=key_layer.dtype))
```
This PR fixes the warnings by using `.to(dtype=...)` rather than `torch.tensor(..., dtype=...)`.
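For illustration, a small standalone sketch of the two patterns (tensor shapes are arbitrary, not taken from the DeBERTa code):

```python
import torch

key_layer = torch.randn(2, 4, dtype=torch.float16)
pos_query_layer = torch.randn(4, 2)  # float32

# Old pattern: copy-constructing a tensor from an existing tensor is what
# triggers the UserWarning on recent torch versions.
old = torch.tensor(pos_query_layer.transpose(-1, -2), dtype=key_layer.dtype)

# New pattern: cast with .to(dtype=...) instead of copy-constructing.
new = pos_query_layer.transpose(-1, -2).to(dtype=key_layer.dtype)

assert new.dtype == key_layer.dtype == torch.float16
```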
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19458/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19458",
"html_url": "https://github.com/huggingface/transformers/pull/19458",
"diff_url": "https://github.com/huggingface/transformers/pull/19458.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19458.patch",
"merged_at": 1666016102000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19457
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19457/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19457/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19457/events
|
https://github.com/huggingface/transformers/pull/19457
| 1,403,166,768
|
PR_kwDOCUB6oc5AfoEu
| 19,457
|
Add docstrings for canine model
|
{
"login": "raghavanone",
"id": 115454562,
"node_id": "U_kgDOBuGyYg",
"avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raghavanone",
"html_url": "https://github.com/raghavanone",
"followers_url": "https://api.github.com/users/raghavanone/followers",
"following_url": "https://api.github.com/users/raghavanone/following{/other_user}",
"gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions",
"organizations_url": "https://api.github.com/users/raghavanone/orgs",
"repos_url": "https://api.github.com/users/raghavanone/repos",
"events_url": "https://api.github.com/users/raghavanone/events{/privacy}",
"received_events_url": "https://api.github.com/users/raghavanone/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ydshieh ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Ping @ydshieh ",
"@raghavanone I update your PR to make it work for `CanineForTokenClassification`.\r\n\r\nThank you for your work!",
"_The documentation is not available anymore as the PR was closed or merged._",
"pong @sgugger !"
] | 1,665
| 1,671
| 1,668
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19457/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19457",
"html_url": "https://github.com/huggingface/transformers/pull/19457",
"diff_url": "https://github.com/huggingface/transformers/pull/19457.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19457.patch",
"merged_at": 1668696071000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19456
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19456/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19456/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19456/events
|
https://github.com/huggingface/transformers/issues/19456
| 1,403,138,274
|
I_kwDOCUB6oc5TojDi
| 19,456
|
Original image from Trocr Processor
|
{
"login": "dranreb1660",
"id": 25390001,
"node_id": "MDQ6VXNlcjI1MzkwMDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/25390001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dranreb1660",
"html_url": "https://github.com/dranreb1660",
"followers_url": "https://api.github.com/users/dranreb1660/followers",
"following_url": "https://api.github.com/users/dranreb1660/following{/other_user}",
"gists_url": "https://api.github.com/users/dranreb1660/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dranreb1660/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dranreb1660/subscriptions",
"organizations_url": "https://api.github.com/users/dranreb1660/orgs",
"repos_url": "https://api.github.com/users/dranreb1660/repos",
"events_url": "https://api.github.com/users/dranreb1660/events{/privacy}",
"received_events_url": "https://api.github.com/users/dranreb1660/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nThis question is answered on our forum: https://discuss.huggingface.co/t/get-original-image-from-trocr-processor/24224/2"
] | 1,665
| 1,665
| 1,665
|
NONE
| null |
Hi @NielsRogge, I am following your TrOCR finetuning with PyTorch but I have two questions. The processor resizes the image from 1700 x 134 to 384 x 384: 1) is there a way to maintain the height of the original image, or even use a custom dimension for training, e.g. 512 x 134? and 2) is there a way to get the original image back for logging purposes, as the processed image is unrecognizable after those basic augmentations? Thanks
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19456/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19455
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19455/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19455/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19455/events
|
https://github.com/huggingface/transformers/pull/19455
| 1,403,133,536
|
PR_kwDOCUB6oc5Afg9e
| 19,455
|
Extend `nested_XXX` functions to mappings/dicts.
|
{
"login": "Guillem96",
"id": 21279306,
"node_id": "MDQ6VXNlcjIxMjc5MzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/21279306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guillem96",
"html_url": "https://github.com/Guillem96",
"followers_url": "https://api.github.com/users/Guillem96/followers",
"following_url": "https://api.github.com/users/Guillem96/following{/other_user}",
"gists_url": "https://api.github.com/users/Guillem96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guillem96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guillem96/subscriptions",
"organizations_url": "https://api.github.com/users/Guillem96/orgs",
"repos_url": "https://api.github.com/users/Guillem96/repos",
"events_url": "https://api.github.com/users/Guillem96/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guillem96/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you for your feedback @sgugger . \r\nSuggestions and style applied!",
"Thanks a lot!"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Extended `nested_XXX` trainer pt utility functions to work with mappings (dict, OrderedDict, etc.).
Some classes that model the models' outputs inherit from `ModelOutput`, which in turn is an `OrderedDict`. Currently, applying these `nested_XXX` functions to these model outputs fails with an error.
Extending these nested utilities to work with dictionaries fixes this issue.
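A hedged sketch of the idea (the helper name is illustrative, not the actual `trainer_pt_utils` code): recurse into mappings the same way the utilities already recurse into lists and tuples, rebuilding the same container type so `OrderedDict`-like classes survive the round trip.

```python
from collections.abc import Mapping

def nested_apply(fn, obj):
    """Apply fn to every leaf of a nested structure of lists/tuples/mappings."""
    if isinstance(obj, Mapping):
        # Rebuild the same mapping type so OrderedDict subclasses are preserved.
        return type(obj)({k: nested_apply(fn, v) for k, v in obj.items()})
    if isinstance(obj, (list, tuple)):
        return type(obj)(nested_apply(fn, v) for v in obj)
    return fn(obj)

out = nested_apply(lambda x: x * 2, {"logits": [1, 2], "loss": 3})
```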
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19455/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19455",
"html_url": "https://github.com/huggingface/transformers/pull/19455",
"diff_url": "https://github.com/huggingface/transformers/pull/19455.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19455.patch",
"merged_at": 1665490402000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19454
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19454/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19454/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19454/events
|
https://github.com/huggingface/transformers/pull/19454
| 1,403,108,123
|
PR_kwDOCUB6oc5Afbjy
| 19,454
|
Fix TF batch norm momentum and epsilon values
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
Updates momentum values for TF batch norm layers to match the pytorch models'.
The momentum value for PyTorch and TensorFlow batch normalization layers is not equivalent, as pointed out by @mathieujouffroy [here](https://github.com/huggingface/transformers/pull/18597#issuecomment-1263381794)
The TensorFlow value should be (1 - pytorch_momentum) in order to ensure the correct updates are applied to the running mean and running variance calculations. We wouldn't observe a difference loading a pretrained model and performing inference, but evaluation outputs would change after some training steps.
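The conversion can be sketched as follows (the helper name is illustrative, not part of the PR):

```python
# PyTorch: running = (1 - momentum) * running + momentum * batch_stat   (default momentum 0.1)
# Keras:   moving  = momentum * moving + (1 - momentum) * batch_stat    (default momentum 0.99)
# An equivalent Keras BatchNormalization therefore needs momentum = 1 - pt_momentum.

def pt_to_tf_momentum(pt_momentum: float) -> float:
    return 1.0 - pt_momentum

# PyTorch's BatchNorm default of 0.1 maps to a Keras momentum of 0.9.
assert abs(pt_to_tf_momentum(0.1) - 0.9) < 1e-12
```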
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19454/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19454",
"html_url": "https://github.com/huggingface/transformers/pull/19454",
"diff_url": "https://github.com/huggingface/transformers/pull/19454.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19454.patch",
"merged_at": 1665411461000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19453
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19453/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19453/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19453/events
|
https://github.com/huggingface/transformers/pull/19453
| 1,403,087,523
|
PR_kwDOCUB6oc5AfXJg
| 19,453
|
Added support for multivariate independent emission heads
|
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"indeed! Somehow i do not get this failing test locally... any idea what could be wrong?",
"thank you!\r\n"
] | 1,665
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
This adds support for multivariate independent emission heads to the time series transformer model.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19453/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19453",
"html_url": "https://github.com/huggingface/transformers/pull/19453",
"diff_url": "https://github.com/huggingface/transformers/pull/19453.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19453.patch",
"merged_at": 1666355530000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19452
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19452/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19452/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19452/events
|
https://github.com/huggingface/transformers/issues/19452
| 1,403,083,947
|
I_kwDOCUB6oc5ToVyr
| 19,452
|
Different behaviour of AutoTokenizer and T5Tokenizer
|
{
"login": "pietrolesci",
"id": 61748653,
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pietrolesci",
"html_url": "https://github.com/pietrolesci",
"followers_url": "https://api.github.com/users/pietrolesci/followers",
"following_url": "https://api.github.com/users/pietrolesci/following{/other_user}",
"gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions",
"organizations_url": "https://api.github.com/users/pietrolesci/orgs",
"repos_url": "https://api.github.com/users/pietrolesci/repos",
"events_url": "https://api.github.com/users/pietrolesci/events{/privacy}",
"received_events_url": "https://api.github.com/users/pietrolesci/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"When you're using `T5Tokenizer`, you're loading the slow version. When using `AutoTokenizer`, you're loading the fast version by default. \r\n\r\nUnfortunately, due to a difference in implementation, the fast and slow tokenizer can have some differences.\r\n\r\nPinging @SaulLu and @ArthurZucker for knowledge",
"Yes, noticed the same thing with GPT2, especially with OOV that are automatically converted to blank with the fast version! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.22.2
- Platform: Linux-5.4.209-129.367.amzn2int.x86_64-x86_64-with-glibc2.26
- Python version: 3.9.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The `T5Tokenizer` prepends a whitespace before the eos token when a new eos token is provided, while `AutoTokenizer` maintains the usual behaviour.
```python
from transformers import AutoTokenizer, T5Tokenizer
text = ["My name is Pietro", "I love pizza"]
tok = T5Tokenizer.from_pretrained("t5-small", bos_token="[bos]", eos_token="[eos]", sep_token="[sep]")
auto_tok = AutoTokenizer.from_pretrained("t5-small", bos_token="[bos]", eos_token="[eos]", sep_token="[sep]")
print(tok.batch_decode(tok(text)["input_ids"]))
print(auto_tok.batch_decode(tok(text)["input_ids"]))
#> ['My name is Pietro [eos]', 'I love pizza [eos]']
#> ['My name is Pietro[eos]', 'I love pizza[eos]']
tok = T5Tokenizer.from_pretrained("t5-small")
auto_tok = AutoTokenizer.from_pretrained("t5-small")
print(tok.batch_decode(tok(text)["input_ids"]))
print(auto_tok.batch_decode(tok(text)["input_ids"]))
#> ['My name is Pietro</s>', 'I love pizza</s>']
#> ['My name is Pietro</s>', 'I love pizza</s>']
```
### Expected behavior
The two tokenizer classes should be equivalent
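Until the slow tokenizer is fixed, one hedged workaround (my own suggestion, not part of the library) is to strip the stray space in a post-processing step. A minimal sketch, assuming the custom eos token `[eos]` from the snippet above:

```python
def strip_space_before_eos(text: str, eos_token: str = "[eos]") -> str:
    # Drop the single stray space that the slow T5Tokenizer inserts
    # before the custom eos token at encoding time.
    return text.replace(" " + eos_token, eos_token)

decoded = ["My name is Pietro [eos]", "I love pizza [eos]"]
print([strip_space_before_eos(t) for t in decoded])
```

This only papers over the symptom in decoded strings; the encoded ids still contain the extra space token.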
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19452/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19451
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19451/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19451/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19451/events
|
https://github.com/huggingface/transformers/pull/19451
| 1,403,046,099
|
PR_kwDOCUB6oc5AfOVC
| 19,451
|
Fix repo names for ESM tests
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
MEMBER
| null |
This should cause ESM tests to stop erroring out all the time! Cc @amyeroberts @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19451/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19451",
"html_url": "https://github.com/huggingface/transformers/pull/19451",
"diff_url": "https://github.com/huggingface/transformers/pull/19451.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19451.patch",
"merged_at": 1665404400000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19450
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19450/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19450/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19450/events
|
https://github.com/huggingface/transformers/pull/19450
| 1,402,938,450
|
PR_kwDOCUB6oc5Ae3b5
| 19,450
|
Add LiLT
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds LiLT, a simple way to extend LayoutLM to any language that has a pre-trained RoBERTa checkpoint.
To do:
- [x] setup new organization, transfer checkpoints
- [x] make tests faster
- [x] remove is_decoder logic
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19450/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19450/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19450",
"html_url": "https://github.com/huggingface/transformers/pull/19450",
"diff_url": "https://github.com/huggingface/transformers/pull/19450.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19450.patch",
"merged_at": 1665562281000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19449
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19449/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19449/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19449/events
|
https://github.com/huggingface/transformers/pull/19449
| 1,402,890,728
|
PR_kwDOCUB6oc5AetQE
| 19,449
|
[WIP] Fix weights initialization of several vision models
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19449). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Any further updates? @NielsRogge @amyeroberts "
] | 1,665
| 1,696
| null |
CONTRIBUTOR
| null |
# What does this PR do?
This PR is a follow-up of #19341, to make sure weights are properly initialized when training vision models from scratch.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19449/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19449",
"html_url": "https://github.com/huggingface/transformers/pull/19449",
"diff_url": "https://github.com/huggingface/transformers/pull/19449.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19449.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19448
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19448/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19448/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19448/events
|
https://github.com/huggingface/transformers/issues/19448
| 1,402,878,860
|
I_kwDOCUB6oc5TnjuM
| 19,448
|
OWL-ViT
|
{
"login": "FrancescoSaverioZuppichini",
"id": 15908060,
"node_id": "MDQ6VXNlcjE1OTA4MDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/15908060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrancescoSaverioZuppichini",
"html_url": "https://github.com/FrancescoSaverioZuppichini",
"followers_url": "https://api.github.com/users/FrancescoSaverioZuppichini/followers",
"following_url": "https://api.github.com/users/FrancescoSaverioZuppichini/following{/other_user}",
"gists_url": "https://api.github.com/users/FrancescoSaverioZuppichini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FrancescoSaverioZuppichini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrancescoSaverioZuppichini/subscriptions",
"organizations_url": "https://api.github.com/users/FrancescoSaverioZuppichini/orgs",
"repos_url": "https://api.github.com/users/FrancescoSaverioZuppichini/repos",
"events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/events{/privacy}",
"received_events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi Francesco, this feature was already asked and a PR to add this feature can be found here: #18891 ",
"nice, thanks @NielsRogge "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
### Feature request
Dear all,
It looks like [OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit) doesn't support image-conditioned detection.
### Motivation
Image-conditioned detection is the most appealing feature of this model.
### Your contribution
No, just pointing this out. Thanks
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19448/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19447
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19447/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19447/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19447/events
|
https://github.com/huggingface/transformers/issues/19447
| 1,402,709,733
|
I_kwDOCUB6oc5Tm6bl
| 19,447
|
T5ForConditionalGeneration output differently with the same batch input
|
{
"login": "CaffreyR",
"id": 84232793,
"node_id": "MDQ6VXNlcjg0MjMyNzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/84232793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaffreyR",
"html_url": "https://github.com/CaffreyR",
"followers_url": "https://api.github.com/users/CaffreyR/followers",
"following_url": "https://api.github.com/users/CaffreyR/following{/other_user}",
"gists_url": "https://api.github.com/users/CaffreyR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaffreyR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaffreyR/subscriptions",
"organizations_url": "https://api.github.com/users/CaffreyR/orgs",
"repos_url": "https://api.github.com/users/CaffreyR/repos",
"events_url": "https://api.github.com/users/CaffreyR/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaffreyR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @CaffreyR π \r\n\r\nCan you share a snippet where I can fully reproduce the issue locally? Also -- am I right in saying that the issue is that the exact same input might result in different outputs, using `model.generate()`?",
"Hi @gante , thanks for your kind reply. \r\nSorry but the full code has not been released. It is actually modified from the code of facebook [FID](https://github.com/facebookresearch/FiD). I count the time in the evaluation\r\nhttps://github.com/facebookresearch/FiD/blob/main/test_reader.py#L36\r\n\r\nYou can add the code in the `for` cycle, \r\n```\r\nfor i, batch in enumerate(dataloader):\r\n (idx, _, _, context_ids, context_mask) = batch\r\n torch.cuda.synchronize()\r\n import time\r\n start = time.perf_counter()\r\n if opt.write_crossattention_scores:\r\n model.reset_score_storage()\r\n\r\n outputs = model.generate(\r\n input_ids=context_ids.cuda(),\r\n attention_mask=context_mask.cuda(),\r\n max_length=50,\r\n )\r\n\r\n if opt.write_crossattention_scores:\r\n crossattention_scores = model.get_crossattention_scores(context_mask.cuda())\r\n\r\n for k, o in enumerate(outputs):\r\n ans = tokenizer.decode(o, skip_special_tokens=True)\r\n example = dataset.data[idx[k]]\r\n if 'answers' in example:\r\n score = src.evaluation.ems(ans, example['answers'])\r\n exactmatch.append(score)\r\n\r\n if opt.write_results:\r\n fw.write(str(example['id']) + \"\\t\" + ans + '\\n')\r\n if opt.write_crossattention_scores:\r\n for j in range(context_ids.size(1)):\r\n example['ctxs'][j]['score'] = crossattention_scores[k, j].item()\r\n\r\n total += 1\r\n if (i + 1) % opt.eval_print_freq == 0:\r\n log = f'Process rank:{opt.global_rank}, {i+1} / {len(dataloader)}'\r\n if len(exactmatch) == 0:\r\n log += '| no answer to compute scores'\r\n else:\r\n log += f' | average = {np.mean(exactmatch):.3f}'\r\n logger.warning(log)\r\n torch.cuda.synchronize()\r\n end = time.perf_counter()\r\n print(end-start)\r\n\r\nlogger.warning(f'Process rank:{opt.global_rank}, total {total} | average = {np.mean(exactmatch):.3f}')\r\nif opt.is_distributed:\r\n torch.distributed.barrier()\r\nscore, total = src.util.weighted_average(np.mean(exactmatch), total, opt)\r\n\r\nreturn score, 
total\r\n\r\n```\r\n\r\n\r\nAnd the `outputs` , there actually 3 `outputs`\r\n\r\n- [The outputs of model generate](https://github.com/facebookresearch/FiD/blob/main/test_reader.py#L42)\r\n- The number of `forward` in some layers.\r\n- The time, (Some are 1.5 times the size of another)\r\n\r\n\r\nI think the difference in `The outputs of model generate` can be fixed by loading the same weight of model, but the second and third are different. \r\n\r\n\r\nMany thanks again for your time\r\n\r\n\r\nBest,\r\nCaffreyR",
"@CaffreyR without an exact script, I am limited in what I can do :) I understand your limitations, but the problem you are describing can come from many places.\r\n\r\nIn essence, `generate()` can have variable outputs (which leads to different execution times) for the same input in two circumstances:\r\n1. `generate()` is configured to not be deterministic. If `transformers` `generate()` is being used without modifications, this should only be possible with the `do_sample=True` argument.\r\n2. the model is not the same between `generate()` calls. \r\n",
"Hi @gante , thanks again for your reply. It actually do not modify, see [here](https://github.com/facebookresearch/FiD/blob/main/src/model.py#L51), it actually just use the `generate()` from `transformers.T5ForConditionalGeneration`\r\n\r\nAnd what is the `do_sample=True` ? Because in the code here.\r\nhttps://github.com/facebookresearch/FiD/blob/main/test_reader.py#L115\r\n\r\nThere is a sampler , does it match the `circumstance 1`, could you please explain it more? Thanks\r\n\r\n```\r\nfrom torch.utils.data import DataLoader, SequentialSampler\r\neval_examples = src.data.load_data(\r\n opt.eval_data, \r\n global_rank=opt.global_rank, #use the global rank and world size attibutes to split the eval set on multiple gpus\r\n world_size=opt.world_size\r\n )\r\n eval_dataset = src.data.Dataset(\r\n eval_examples, \r\n opt.n_context, \r\n )\r\n\r\n eval_sampler = SequentialSampler(eval_dataset) \r\n eval_dataloader = DataLoader(\r\n eval_dataset, \r\n sampler=eval_sampler, \r\n batch_size=opt.per_gpu_batch_size,\r\n num_workers=20, \r\n collate_fn=collator_function\r\n )\r\n```\r\n",
"It shouldn't be related, `SequentialSampler` only touches the data, not the `generate()` method.\r\n\r\nAs for an explanation of `do_sample`, you can refer to our [docs](https://huggingface.co/docs/transformers/v4.22.2/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate) or our [blog post](https://huggingface.co/blog/how-to-generate).\r\n\r\nPlease note that without a full reproduction script I won't give further support here. As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository (with clear reproducibility) and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) π€",
"HI @CaffreyR ,\r\n\r\nMaybe it's because of the `t5-base` configuration ? https://huggingface.co/t5-base/blob/main/config.json#L21\r\nThese lines modify the default options of `generate` for this model.",
"Hi @Narsil , thanks for your kind reply. Do you mean `task_specific_params`? Could you please explain more? What is the default option and how does it modify them ? Thanks!",
"the pipeline reads `task_specific_params` and overrides the default when it's present.\r\n\r\nWe realized this wasn't super discoverable, so very few models have this feature being used, but I happen to remember this one does.\r\n\r\nSo if you're using `t5-base` as a `summarization` pipeline (which I think is the default) then the pipeline will use those defaults and treat them as regular params, it happens these control the `generate_kwargs` of `generate`.\r\nSometimes models also have defaults in the `config` (same idea just it's for the whole model and does not depend on the actual task).\r\nNeither of these mechanism is really great at showing to users what happens but it's great to try and provide sane defaults (or the ones used in the original repo/ original paper).\r\n\r\nIf you want to override any you just need to supply yours directly to `generate` for instance.\r\n\r\n`User specified > Config > Default` is the order of resolution (`pipeline` has a few more rules, but you're not using them in fact).\r\n",
"Hi @Narsil , thanks for your explanation. So what should I do, the code here just use t5.config to them. I need to delete `task_specific_params` in this case?",
"@CaffreyR \r\n\r\nYou can:\r\n- Override the params directly in the pipeline\r\n\r\n```python\r\npipeline(model=\"t5-base\", \r\n**{\r\n \"early_stopping\": true,\r\n \"length_penalty\": None,\r\n \"max_length\": None,\r\n \"min_length\": None,\r\n \"no_repeat_ngram_size\": None,\r\n \"num_beams\": None,\r\n \"prefix\": \"summarize: \" # You probably want to keep this for summarization as it's how the model was trained\r\n })\r\n ```\r\n \r\n Or deactivate them altogether by loading the model before the pipeline\r\n \r\n ```python\r\n model =AutoModelForXXX.from_pretrained(\"t5-base\")\r\n model.config.task_specific_params = None\r\n \r\n pipe = pipeline(task=\"summarization\", model=model, tokenizer=tokenizer)\r\n ```\r\n \r\n Would either solution work for you ?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.13.0.dev20220709 (False)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@LysandreJik @patrickvonplaten, @Narsil, @gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
It is actually modified from [FID](https://github.com/facebookresearch/FiD). In the code, I define a model that inherits from `T5ForConditionalGeneration`. I also count how many times `forward` is called on certain layers, and the count differs between the two identical batches.
```python
class FiDT5(transformers.T5ForConditionalGeneration):
def __init__(self, config):
...
def generate(self, input_ids, attention_mask, max_length):
self.encoder.n_passages = input_ids.size(1)
return super().generate(
input_ids=input_ids.view(input_ids.size(0), -1),
attention_mask=attention_mask.view(attention_mask.size(0), -1),
max_length=max_length
)
t5 = transformers.T5ForConditionalGeneration.from_pretrained('t5-base')
model = FiDT5(t5.config)
model.load_t5(t5.state_dict())
for i, batch in enumerate(dataloader):
(idx, _, _, context_ids, context_mask) = batch
outputs = model.generate(
input_ids=context_ids.cuda(),
attention_mask=context_mask.cuda(),
max_length=50,
)
print(outputs)
```
### Expected behavior
For two identical batches, it prints:
```
tensor([[ 0, 22789, 9, 3038, 16924, 2060, 1]], device='cuda:0')
tensor([[ 0, 17724, 5500, 7059, 1]], device='cuda:0')
```
And the number of layer calls actually differs: for example, in the first batch `forward` runs 288 times, but in the second only 216 times.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19447/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19446
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19446/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19446/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19446/events
|
https://github.com/huggingface/transformers/issues/19446
| 1,402,613,509
|
I_kwDOCUB6oc5Tmi8F
| 19,446
|
Add LongT5 to AutoConfig
|
{
"login": "robbohua",
"id": 97416182,
"node_id": "U_kgDOBc5z9g",
"avatar_url": "https://avatars.githubusercontent.com/u/97416182?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robbohua",
"html_url": "https://github.com/robbohua",
"followers_url": "https://api.github.com/users/robbohua/followers",
"following_url": "https://api.github.com/users/robbohua/following{/other_user}",
"gists_url": "https://api.github.com/users/robbohua/gists{/gist_id}",
"starred_url": "https://api.github.com/users/robbohua/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/robbohua/subscriptions",
"organizations_url": "https://api.github.com/users/robbohua/orgs",
"repos_url": "https://api.github.com/users/robbohua/repos",
"events_url": "https://api.github.com/users/robbohua/events{/privacy}",
"received_events_url": "https://api.github.com/users/robbohua/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @robbohua -- I can run the script you shared without problems on my end. What version of `transformers` are you using? :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,669
| 1,668
|
NONE
| null |
### System Info
Running `config = AutoConfig.from_pretrained("google/long-t5-local-base")` raises `KeyError: 'longt5'`.
I am hitting this through the SentenceTransformers package when initialising a search model. Could LongT5 be added to `AutoConfig`?
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
`config = AutoConfig.from_pretrained("google/long-t5-local-base")`
### Expected behavior
The config is loaded
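For context: `longt5` registration landed in a later `transformers` release (around v4.20.0, if I recall correctly, so the minimum version below is an assumption), which is why upgrading usually resolves the `KeyError`. A dependency-free sketch of the version check one might do before loading the config:

```python
def supports_longt5(installed: str, minimum: str = "4.20.0") -> bool:
    # Compare dotted version strings numerically, e.g. "4.22.2" >= "4.20.0".
    parse = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return parse(installed) >= parse(minimum)

print(supports_longt5("4.22.2"))  # a release that should include LongT5
print(supports_longt5("4.18.0"))  # a release that predates it
```

In real code the installed version is available as `transformers.__version__`.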
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19446/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19445
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19445/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19445/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19445/events
|
https://github.com/huggingface/transformers/issues/19445
| 1,402,420,923
|
I_kwDOCUB6oc5Tlz67
| 19,445
|
Anything but plain "greedy" search "not implemented for 'Half'"
|
{
"login": "auwsom",
"id": 25093612,
"node_id": "MDQ6VXNlcjI1MDkzNjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/25093612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/auwsom",
"html_url": "https://github.com/auwsom",
"followers_url": "https://api.github.com/users/auwsom/followers",
"following_url": "https://api.github.com/users/auwsom/following{/other_user}",
"gists_url": "https://api.github.com/users/auwsom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/auwsom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/auwsom/subscriptions",
"organizations_url": "https://api.github.com/users/auwsom/orgs",
"repos_url": "https://api.github.com/users/auwsom/repos",
"events_url": "https://api.github.com/users/auwsom/events{/privacy}",
"received_events_url": "https://api.github.com/users/auwsom/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hey @auwsom ! \r\nThanks for your message! Indeed it would be better to support all the possible sampling procedures for 8-bit models πͺ \r\nThere is definitely something around half-precision logits and `generate` that needs a closer look! \r\nAlso this seems to be a duplicate of https://github.com/TimDettmers/bitsandbytes/issues/42#issuecomment-1272877078 - so tagging the issue here \r\n\r\nWill look into it ASAP! ",
"Hey @auwsom !\r\nThanks for your patience!\r\nIt appears that the workaroud is pretty much straightforward, could you run `generate` with `inputs_ids` set to a GPU device? For example by making sure that: \r\n```\r\ninput_ids = input_ids.to('cuda')\r\n```\r\n`generate` yields an error since by instantiating a model with `device_map=auto` forces the output of the model to be on the same device as the input. In the snippet in https://github.com/TimDettmers/bitsandbytes/issues/42#issuecomment-1272877078 the `input_ids` are set on the `cpu`. I believe that making sure that these are on the GPU should do the trick for you, before waiting a proper fix to be merged in #19468 ? Could you confirm that this workaround fixes your issue? Thanks!",
"@younesbelkada yes, this works on the example notebook on Colab. Thanks!"
] | 1,665
| 1,665
| 1,665
|
NONE
| null |
### System Info
`transformers-cli env` fails with:
```
Traceback (most recent call last):
File "/home/user/.local/bin/transformers-cli", line 5, in <module>
from transformers.commands.transformers_cli import main
File "/home/user/.local/lib/python3.10/site-packages/transformers/commands/transformers_cli.py", line 24, in <module>
from .pt_to_tf import PTtoTFCommand
File "/home/user/.local/lib/python3.10/site-packages/transformers/commands/pt_to_tf.py", line 21, in <module>
from datasets import load_dataset
ModuleNotFoundError: No module named 'datasets'
```
https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F?usp=sharing
### Who can help?
@sgugger, @patil-suraj
I'm following:
https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F
from:
https://huggingface.co/blog/hf-bitsandbytes-integration
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I love being able to use the int8 models and to run the big LLMs on Colab! However, I just discovered that only the default greedy search works. This is unfortunate, because for a chat application, top-k sampling provides much more natural variation.
Beam search: `RuntimeError: "log_softmax_lastdim_kernel_impl" not implemented for 'Half'`
Top-k sampling: `RuntimeError: "topk_cpu" not implemented for 'Half'`
### Expected behavior
Being able to use top-k and top-p sampling on int8-optimized models.
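As the maintainers later pointed out in the comments, the root cause was `input_ids` staying on the CPU while `device_map="auto"` pins the model and its half-precision sampling kernels to the GPU; moving the inputs with `input_ids = input_ids.to("cuda")` resolves it. A toy, torch-free sketch of that failure mode (plain strings stand in for real devices, and `mock_generate` is a hypothetical stand-in, not the library API):

```python
def mock_generate(input_device: str, model_device: str = "cuda") -> str:
    # With device_map="auto", sampling ops such as top-k run wherever the
    # inputs live; their half-precision kernels exist only on the GPU, so
    # CPU inputs trigger the "not implemented for 'Half'" errors above.
    if input_device != model_device:
        raise RuntimeError("\"topk_cpu\" not implemented for 'Half'")
    return "generated text"

# Fix from the thread, in real code: input_ids = input_ids.to("cuda")
print(mock_generate("cuda"))
```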
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19445/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19445/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19444
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19444/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19444/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19444/events
|
https://github.com/huggingface/transformers/pull/19444
| 1,402,379,379
|
PR_kwDOCUB6oc5AdB5X
| 19,444
|
Syntax issues (lines 126, 203) Documentation: @sgugger
|
{
"login": "kant",
"id": 32717,
"node_id": "MDQ6VXNlcjMyNzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/32717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kant",
"html_url": "https://github.com/kant",
"followers_url": "https://api.github.com/users/kant/followers",
"following_url": "https://api.github.com/users/kant/following{/other_user}",
"gists_url": "https://api.github.com/users/kant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kant/subscriptions",
"organizations_url": "https://api.github.com/users/kant/orgs",
"repos_url": "https://api.github.com/users/kant/repos",
"events_url": "https://api.github.com/users/kant/events{/privacy}",
"received_events_url": "https://api.github.com/users/kant/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot for your contribution!"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Syntax issues (lines 126, 203)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
No previous issue related
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@julien-c
@donelianc
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19444/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19444",
"html_url": "https://github.com/huggingface/transformers/pull/19444",
"diff_url": "https://github.com/huggingface/transformers/pull/19444.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19444.patch",
"merged_at": 1665490461000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19443
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19443/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19443/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19443/events
|
https://github.com/huggingface/transformers/pull/19443
| 1,402,374,659
|
PR_kwDOCUB6oc5AdA-W
| 19,443
|
Attention mask fixed. Documentation: @sgugger
|
{
"login": "kant",
"id": 32717,
"node_id": "MDQ6VXNlcjMyNzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/32717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kant",
"html_url": "https://github.com/kant",
"followers_url": "https://api.github.com/users/kant/followers",
"following_url": "https://api.github.com/users/kant/following{/other_user}",
"gists_url": "https://api.github.com/users/kant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kant/subscriptions",
"organizations_url": "https://api.github.com/users/kant/orgs",
"repos_url": "https://api.github.com/users/kant/repos",
"events_url": "https://api.github.com/users/kant/events{/privacy}",
"received_events_url": "https://api.github.com/users/kant/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19443). All of your documentation changes will be reflected on that endpoint.",
"Same as your other PRs, there is an issue with your CircleCI permissions, the tests won't run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
CONTRIBUTOR
| null |
# What does this PR do?
* Attention mask fixed (line 217)
* Typo fixed (paragraph 326)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
No previous issue related
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@osanseviero
@Narsil
@ydshieh
@sgugger
@omarespejel
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19443/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19443/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19443",
"html_url": "https://github.com/huggingface/transformers/pull/19443",
"diff_url": "https://github.com/huggingface/transformers/pull/19443.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19443.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19442
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19442/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19442/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19442/events
|
https://github.com/huggingface/transformers/pull/19442
| 1,402,365,661
|
PR_kwDOCUB6oc5Ac_OI
| 19,442
|
Syntax issues Documentation: @sgugger
|
{
"login": "kant",
"id": 32717,
"node_id": "MDQ6VXNlcjMyNzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/32717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kant",
"html_url": "https://github.com/kant",
"followers_url": "https://api.github.com/users/kant/followers",
"following_url": "https://api.github.com/users/kant/following{/other_user}",
"gists_url": "https://api.github.com/users/kant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kant/subscriptions",
"organizations_url": "https://api.github.com/users/kant/orgs",
"repos_url": "https://api.github.com/users/kant/repos",
"events_url": "https://api.github.com/users/kant/events{/privacy}",
"received_events_url": "https://api.github.com/users/kant/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"It seems there is an issue with your CircleCI permissions, the tests won't run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?",
"Permissions refreshed @sgugger "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Syntax issues (lines 497, 526)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Resolves no previous issue
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@omarespejel
@amyeroberts
@yharyarias
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19442/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19442",
"html_url": "https://github.com/huggingface/transformers/pull/19442",
"diff_url": "https://github.com/huggingface/transformers/pull/19442.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19442.patch",
"merged_at": 1665577735000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19441
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19441/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19441/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19441/events
|
https://github.com/huggingface/transformers/pull/19441
| 1,402,357,690
|
PR_kwDOCUB6oc5Ac9rh
| 19,441
|
[WIP] Add type hints for Lxmert (TF)
|
{
"login": "elusenji",
"id": 87298621,
"node_id": "MDQ6VXNlcjg3Mjk4NjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/87298621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elusenji",
"html_url": "https://github.com/elusenji",
"followers_url": "https://api.github.com/users/elusenji/followers",
"following_url": "https://api.github.com/users/elusenji/following{/other_user}",
"gists_url": "https://api.github.com/users/elusenji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elusenji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elusenji/subscriptions",
"organizations_url": "https://api.github.com/users/elusenji/orgs",
"repos_url": "https://api.github.com/users/elusenji/repos",
"events_url": "https://api.github.com/users/elusenji/events{/privacy}",
"received_events_url": "https://api.github.com/users/elusenji/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@Rocketknight1 Is there anything else I need to add to this PR?",
"@elusenji No, sorry! I meant to merge it after the tests passed but lost track of it yesterday. Doing it now, and thanks again!"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds type hints to the Lxmert model for TensorFlow.
Models:
Lxmert: @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19441/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19441",
"html_url": "https://github.com/huggingface/transformers/pull/19441",
"diff_url": "https://github.com/huggingface/transformers/pull/19441.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19441.patch",
"merged_at": 1665672807000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19440
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19440/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19440/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19440/events
|
https://github.com/huggingface/transformers/pull/19440
| 1,402,353,662
|
PR_kwDOCUB6oc5Ac86Q
| 19,440
|
Backtick fixed (paragraph 68) Documentation: @sgugger
|
{
"login": "kant",
"id": 32717,
"node_id": "MDQ6VXNlcjMyNzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/32717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kant",
"html_url": "https://github.com/kant",
"followers_url": "https://api.github.com/users/kant/followers",
"following_url": "https://api.github.com/users/kant/following{/other_user}",
"gists_url": "https://api.github.com/users/kant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kant/subscriptions",
"organizations_url": "https://api.github.com/users/kant/orgs",
"repos_url": "https://api.github.com/users/kant/repos",
"events_url": "https://api.github.com/users/kant/events{/privacy}",
"received_events_url": "https://api.github.com/users/kant/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
* Backtick fixed (paragraph 68)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Resolves no previous issue
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@omarespejel
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19440/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19440",
"html_url": "https://github.com/huggingface/transformers/pull/19440",
"diff_url": "https://github.com/huggingface/transformers/pull/19440.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19440.patch",
"merged_at": 1665406034000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19439
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19439/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19439/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19439/events
|
https://github.com/huggingface/transformers/pull/19439
| 1,402,343,559
|
PR_kwDOCUB6oc5Ac6-5
| 19,439
|
Wrap VisualBERT integration test forward passes with torch.no_grad()
|
{
"login": "daspartho",
"id": 59410571,
"node_id": "MDQ6VXNlcjU5NDEwNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daspartho",
"html_url": "https://github.com/daspartho",
"followers_url": "https://api.github.com/users/daspartho/followers",
"following_url": "https://api.github.com/users/daspartho/following{/other_user}",
"gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daspartho/subscriptions",
"organizations_url": "https://api.github.com/users/daspartho/orgs",
"repos_url": "https://api.github.com/users/daspartho/repos",
"events_url": "https://api.github.com/users/daspartho/events{/privacy}",
"received_events_url": "https://api.github.com/users/daspartho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
As proposed in issue #14642, this PR wraps forward passes in VisualBERT integration tests with torch.no_grad(). This way, no unnecessary gradients are computed during inference.
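The change amounts to wrapping each test's forward pass in the no-grad context manager; a minimal sketch of the pattern (the linear model here is a stand-in, not the actual VisualBERT test code):

```python
import torch

model = torch.nn.Linear(4, 2)   # stand-in for the model under test
inputs = torch.randn(1, 4)

with torch.no_grad():           # skip autograd bookkeeping during inference
    outputs = model(inputs)

# No computation graph was built, so no gradient memory is held
assert not outputs.requires_grad
```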
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please take a look at it?
Thanks :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19439/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19439",
"html_url": "https://github.com/huggingface/transformers/pull/19439",
"diff_url": "https://github.com/huggingface/transformers/pull/19439.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19439.patch",
"merged_at": 1665428076000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19438
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19438/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19438/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19438/events
|
https://github.com/huggingface/transformers/pull/19438
| 1,402,341,639
|
PR_kwDOCUB6oc5Ac6oI
| 19,438
|
Wrap RoFormer integration test forward passes with torch.no_grad()
|
{
"login": "daspartho",
"id": 59410571,
"node_id": "MDQ6VXNlcjU5NDEwNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daspartho",
"html_url": "https://github.com/daspartho",
"followers_url": "https://api.github.com/users/daspartho/followers",
"following_url": "https://api.github.com/users/daspartho/following{/other_user}",
"gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daspartho/subscriptions",
"organizations_url": "https://api.github.com/users/daspartho/orgs",
"repos_url": "https://api.github.com/users/daspartho/repos",
"events_url": "https://api.github.com/users/daspartho/events{/privacy}",
"received_events_url": "https://api.github.com/users/daspartho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
As proposed in issue #14642, this PR wraps forward passes in RoFormer integration tests with torch.no_grad(). This way, no unnecessary gradients are computed during inference.
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please take a look at it?
Thanks :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19438/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19438",
"html_url": "https://github.com/huggingface/transformers/pull/19438",
"diff_url": "https://github.com/huggingface/transformers/pull/19438.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19438.patch",
"merged_at": 1665428094000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19437
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19437/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19437/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19437/events
|
https://github.com/huggingface/transformers/pull/19437
| 1,402,332,769
|
PR_kwDOCUB6oc5Ac47W
| 19,437
|
Syntax issues (paragraphs 122, 130, 147, 155) Documentation: @sgugger
|
{
"login": "kant",
"id": 32717,
"node_id": "MDQ6VXNlcjMyNzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/32717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kant",
"html_url": "https://github.com/kant",
"followers_url": "https://api.github.com/users/kant/followers",
"following_url": "https://api.github.com/users/kant/following{/other_user}",
"gists_url": "https://api.github.com/users/kant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kant/subscriptions",
"organizations_url": "https://api.github.com/users/kant/orgs",
"repos_url": "https://api.github.com/users/kant/repos",
"events_url": "https://api.github.com/users/kant/events{/privacy}",
"received_events_url": "https://api.github.com/users/kant/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Same as on your other PR, it seems there is an issue with your CircleCI permissions, the tests won't run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?",
"Permissions refreshed @sgugger ",
"Hi @kant It didn't work but I should have fixed the issue on our side. Could you make sure to accept the suggestion above?",
"I accept the above suggestion, @sgugger ",
"So please click the button to commit it.",
"confirmed commit @sgugger "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
* Syntax issues (paragraphs 122, 130, 147, 155): `preentramiento` > `preentrenamiento`
* semantic issue (paragraph 220 & 232 & 252)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Resolves no previous issue
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
@omarespejel
@ignacioct
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19437/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19437",
"html_url": "https://github.com/huggingface/transformers/pull/19437",
"diff_url": "https://github.com/huggingface/transformers/pull/19437.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19437.patch",
"merged_at": 1665595091000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19436
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19436/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19436/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19436/events
|
https://github.com/huggingface/transformers/pull/19436
| 1,402,296,788
|
PR_kwDOCUB6oc5AcyBJ
| 19,436
|
Fixed duplicated line (paragraph #83) Documentation: @sgugger
|
{
"login": "kant",
"id": 32717,
"node_id": "MDQ6VXNlcjMyNzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/32717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kant",
"html_url": "https://github.com/kant",
"followers_url": "https://api.github.com/users/kant/followers",
"following_url": "https://api.github.com/users/kant/following{/other_user}",
"gists_url": "https://api.github.com/users/kant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kant/subscriptions",
"organizations_url": "https://api.github.com/users/kant/orgs",
"repos_url": "https://api.github.com/users/kant/repos",
"events_url": "https://api.github.com/users/kant/events{/privacy}",
"received_events_url": "https://api.github.com/users/kant/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes # (issue)
No previous issue
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@omarespejel
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19436/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19436",
"html_url": "https://github.com/huggingface/transformers/pull/19436",
"diff_url": "https://github.com/huggingface/transformers/pull/19436.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19436.patch",
"merged_at": 1665407314000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19435
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19435/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19435/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19435/events
|
https://github.com/huggingface/transformers/issues/19435
| 1,402,293,542
|
I_kwDOCUB6oc5TlU0m
| 19,435
|
Make ViltModel forward method arguments (inputs_embeds and image_embeds) consistent
|
{
"login": "Infrared1029",
"id": 60873139,
"node_id": "MDQ6VXNlcjYwODczMTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/60873139?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Infrared1029",
"html_url": "https://github.com/Infrared1029",
"followers_url": "https://api.github.com/users/Infrared1029/followers",
"following_url": "https://api.github.com/users/Infrared1029/following{/other_user}",
"gists_url": "https://api.github.com/users/Infrared1029/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Infrared1029/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Infrared1029/subscriptions",
"organizations_url": "https://api.github.com/users/Infrared1029/orgs",
"repos_url": "https://api.github.com/users/Infrared1029/repos",
"events_url": "https://api.github.com/users/Infrared1029/events{/privacy}",
"received_events_url": "https://api.github.com/users/Infrared1029/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi,\r\n\r\nThanks for reporting. I'm afraid we can't change it for backwards compatibility reasons, as people might have already used `image_embeds` with the way it is implemented now. Cc'ing @sgugger to confirm",
"Indeed. The documentation can be clarified however, to make this less surprising to users.",
"makes sense! i guess the docs could be indeed slightly clarified ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
CONTRIBUTOR
| null |
### Feature request
The `inputs_embeds` argument in ViLT expects the raw word embeddings of the input (i.e. [cls_token_emb, token_embs, sep_token_emb]) before the positional or token_type embeddings are added, while the `image_embeds` argument expects the fully processed embeddings [cls_token_emb, patch_embs] + pos_embs + token_type_embs, which seems a bit inconsistent.
### Motivation
I was a bit confused about why the following code was not producing the same output, and then read the source code and realized the issue:
```python
from transformers import ViltProcessor, ViltModel
from PIL import Image
import requests
# prepare image and text
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "hello world"
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm")
model = ViltModel.from_pretrained("dandelin/vilt-b32-mlm")
inputs = processor(image, text, return_tensors="pt")
outputs_1 = model(**inputs)
# using embeds (the wrong way)
img_embs, img_mask, (patch_index, (h, w)) = model.embeddings.visual_embed(inputs['pixel_values'], inputs['pixel_mask'])
txt_embs = model.embeddings.text_embeddings(inputs['input_ids'])
outputs_2 = model(inputs_embeds=txt_embs, image_embeds=img_embs, pixel_mask=img_mask)
# seems to be the correct way
img_embs, img_mask, (patch_index, (h, w)) = model.embeddings.visual_embed(inputs['pixel_values'], inputs['pixel_mask'])
txt_embs = model.embeddings.text_embeddings.word_embeddings(inputs['input_ids']) # word_embeddings instead
outputs_3 = model(inputs_embeds=txt_embs, image_embeds=img_embs, pixel_mask=img_mask)
```
outputs_2 and outputs_3 are (almost) identical (they aren't exact matches, but the total difference across all entries is around -0.009), while outputs_1 and outputs_2 are just different.
### Your contribution
Make both arguments either take the processed embeddings (after adding everything) or just the raw embeddings before adding anything, unless of course this is a deliberate decision made for reasons I don't know. If it is not intended, then I can try to submit a PR that fixes it.
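A toy illustration of why mixing raw and processed embeddings matters (these are hypothetical miniature embedding layers for illustration, not ViLT's actual modules):

```python
import torch

torch.manual_seed(0)

# Miniature stand-ins for the two embedding conventions described above
word_emb = torch.nn.Embedding(10, 8)   # raw word embeddings (what inputs_embeds expects)
pos_emb = torch.nn.Embedding(5, 8)     # positional embeddings added inside the model

ids = torch.tensor([[1, 2, 3]])
raw = word_emb(ids)
processed = raw + pos_emb(torch.arange(3))  # the image_embeds-style, fully processed form

# Feeding `processed` where the model expects `raw` would add the positional
# embeddings a second time, so the two tensors differ
print(torch.allclose(raw, processed))
```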
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19435/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19434
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19434/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19434/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19434/events
|
https://github.com/huggingface/transformers/pull/19434
| 1,402,223,913
|
PR_kwDOCUB6oc5AckPD
| 19,434
|
Fixed a non-working hyperlink in the README.md file
|
{
"login": "MikailINTech",
"id": 45072645,
"node_id": "MDQ6VXNlcjQ1MDcyNjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/45072645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MikailINTech",
"html_url": "https://github.com/MikailINTech",
"followers_url": "https://api.github.com/users/MikailINTech/followers",
"following_url": "https://api.github.com/users/MikailINTech/following{/other_user}",
"gists_url": "https://api.github.com/users/MikailINTech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MikailINTech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MikailINTech/subscriptions",
"organizations_url": "https://api.github.com/users/MikailINTech/orgs",
"repos_url": "https://api.github.com/users/MikailINTech/repos",
"events_url": "https://api.github.com/users/MikailINTech/events{/privacy}",
"received_events_url": "https://api.github.com/users/MikailINTech/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@MikailINTech will close this then. sorry buddy",
"> The link is just a missing double slash, could you please just fix that? Your fix includes some metadata that is not useful.\r\n\r\nSorry ! fixed"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
The hyperlink to the community notebooks was outdated.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19434/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19434",
"html_url": "https://github.com/huggingface/transformers/pull/19434",
"diff_url": "https://github.com/huggingface/transformers/pull/19434.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19434.patch",
"merged_at": 1665421049000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19433
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19433/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19433/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19433/events
|
https://github.com/huggingface/transformers/pull/19433
| 1,402,022,312
|
PR_kwDOCUB6oc5Ab97J
| 19,433
|
Transformers documentation translation to Italian #17459
|
{
"login": "draperkm",
"id": 80494835,
"node_id": "MDQ6VXNlcjgwNDk0ODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/80494835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/draperkm",
"html_url": "https://github.com/draperkm",
"followers_url": "https://api.github.com/users/draperkm/followers",
"following_url": "https://api.github.com/users/draperkm/following{/other_user}",
"gists_url": "https://api.github.com/users/draperkm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/draperkm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/draperkm/subscriptions",
"organizations_url": "https://api.github.com/users/draperkm/orgs",
"repos_url": "https://api.github.com/users/draperkm/repos",
"events_url": "https://api.github.com/users/draperkm/events{/privacy}",
"received_events_url": "https://api.github.com/users/draperkm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,665
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #17459
Italian translation of transformers/docs/source/en/perf_training_tpu.mdx
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19433/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19433",
"html_url": "https://github.com/huggingface/transformers/pull/19433",
"diff_url": "https://github.com/huggingface/transformers/pull/19433.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19433.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19432
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19432/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19432/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19432/events
|
https://github.com/huggingface/transformers/pull/19432
| 1,402,004,504
|
PR_kwDOCUB6oc5Ab6ib
| 19,432
|
Removed XLMModel inheritance from FlaubertModel (torch+tf)
|
{
"login": "D3xter1922",
"id": 59790120,
"node_id": "MDQ6VXNlcjU5NzkwMTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/59790120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/D3xter1922",
"html_url": "https://github.com/D3xter1922",
"followers_url": "https://api.github.com/users/D3xter1922/followers",
"following_url": "https://api.github.com/users/D3xter1922/following{/other_user}",
"gists_url": "https://api.github.com/users/D3xter1922/gists{/gist_id}",
"starred_url": "https://api.github.com/users/D3xter1922/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/D3xter1922/subscriptions",
"organizations_url": "https://api.github.com/users/D3xter1922/orgs",
"repos_url": "https://api.github.com/users/D3xter1922/repos",
"events_url": "https://api.github.com/users/D3xter1922/events{/privacy}",
"received_events_url": "https://api.github.com/users/D3xter1922/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @sgugger I am failing some tests, can you please help me with this?",
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger Hi. esm model test is failing but I am not able to figure out why. Can you please help?\r\n",
"Arf, sorry I misled you. The copied from came from the code of XLM (they are copied from BERT actually) and you need to keep them. Sorry about that!",
"Thank you for your help!"
] | 1,665
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
related to #19303
Removed XLMModel inheritance from FlaubertModel for PyTorch and TensorFlow
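The decoupling pattern looks roughly like this (hypothetical toy class names for illustration; in the library, a `# Copied from` marker lets the consistency checker keep the duplicated code in sync without runtime inheritance):

```python
# Before the change (sketch): cross-model runtime inheritance
class XLMToyLayer:
    def forward(self, x):
        return [v * 2 for v in x]

class FlaubertToyLayerOld(XLMToyLayer):
    pass

# After the change (sketch): a standalone copy marked for the repo's
# "Copied from" consistency check, removing the import-time dependency
# Copied from XLMToyLayer (hypothetical marker for illustration)
class FlaubertToyLayer:
    def forward(self, x):
        return [v * 2 for v in x]

# Same behavior, no inheritance relationship
print(FlaubertToyLayer().forward([1, 2]))
```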
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19432/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19432",
"html_url": "https://github.com/huggingface/transformers/pull/19432",
"diff_url": "https://github.com/huggingface/transformers/pull/19432.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19432.patch",
"merged_at": 1666027530000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19431
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19431/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19431/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19431/events
|
https://github.com/huggingface/transformers/pull/19431
| 1,401,994,657
|
PR_kwDOCUB6oc5Ab4tH
| 19,431
|
Make bert_japanese and cpm independent of their inherited modules
|
{
"login": "Davidy22",
"id": 872968,
"node_id": "MDQ6VXNlcjg3Mjk2OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/872968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Davidy22",
"html_url": "https://github.com/Davidy22",
"followers_url": "https://api.github.com/users/Davidy22/followers",
"following_url": "https://api.github.com/users/Davidy22/following{/other_user}",
"gists_url": "https://api.github.com/users/Davidy22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Davidy22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Davidy22/subscriptions",
"organizations_url": "https://api.github.com/users/Davidy22/orgs",
"repos_url": "https://api.github.com/users/Davidy22/repos",
"events_url": "https://api.github.com/users/Davidy22/events{/privacy}",
"received_events_url": "https://api.github.com/users/Davidy22/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Not going to let some hash collision stop me from publishing something that passes everything else.\r\n\r\nOf note, I actually removed a test from the japanese test suite that appeared to be checking to make sure that it was inheriting from the module that this pr is decoupling the japanese bert from. Implication I suspect being that the original author especially cared that this relation would be in place and enforced.",
"Well, changing the import source did the job, the quick skim and assumption I made from the error wasn't quite on the mark",
"Thanks again for your contribution!",
"Probably going to scour the warnings as mentioned last time next, having some trouble with locally running tests but ci probably has my back"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Another step towards completion of #19303
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19431/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19431",
"html_url": "https://github.com/huggingface/transformers/pull/19431",
"diff_url": "https://github.com/huggingface/transformers/pull/19431.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19431.patch",
"merged_at": 1665504557000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19430
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19430/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19430/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19430/events
|
https://github.com/huggingface/transformers/issues/19430
| 1,401,992,451
|
I_kwDOCUB6oc5TkLUD
| 19,430
|
Create TF port of BigBird
|
{
"login": "E-Aho",
"id": 46936677,
"node_id": "MDQ6VXNlcjQ2OTM2Njc3",
"avatar_url": "https://avatars.githubusercontent.com/u/46936677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/E-Aho",
"html_url": "https://github.com/E-Aho",
"followers_url": "https://api.github.com/users/E-Aho/followers",
"following_url": "https://api.github.com/users/E-Aho/following{/other_user}",
"gists_url": "https://api.github.com/users/E-Aho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/E-Aho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/E-Aho/subscriptions",
"organizations_url": "https://api.github.com/users/E-Aho/orgs",
"repos_url": "https://api.github.com/users/E-Aho/repos",
"events_url": "https://api.github.com/users/E-Aho/events{/privacy}",
"received_events_url": "https://api.github.com/users/E-Aho/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Currently starting to work on this :)"
] | 1,665
| 1,665
| null |
CONTRIBUTOR
| null |
### Model description
[BigBird](https://arxiv.org/abs/2007.14062) is an open source transformer model architecture for longer sequences, and is implemented in the Transformer library already in PyTorch and Flax, but not yet in TensorFlow. This issue tracks the implementation of a TensorFlow version of the model.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
[Location of current implementations](https://github.com/huggingface/transformers/tree/main/src/transformers/models/big_bird)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19430/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/19429
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19429/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19429/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19429/events
|
https://github.com/huggingface/transformers/pull/19429
| 1,401,959,404
|
PR_kwDOCUB6oc5AbyIC
| 19,429
|
fixed grammatical omissions and fixed typos.
|
{
"login": "YOGENDERSS",
"id": 81403270,
"node_id": "MDQ6VXNlcjgxNDAzMjcw",
"avatar_url": "https://avatars.githubusercontent.com/u/81403270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YOGENDERSS",
"html_url": "https://github.com/YOGENDERSS",
"followers_url": "https://api.github.com/users/YOGENDERSS/followers",
"following_url": "https://api.github.com/users/YOGENDERSS/following{/other_user}",
"gists_url": "https://api.github.com/users/YOGENDERSS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YOGENDERSS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YOGENDERSS/subscriptions",
"organizations_url": "https://api.github.com/users/YOGENDERSS/orgs",
"repos_url": "https://api.github.com/users/YOGENDERSS/repos",
"events_url": "https://api.github.com/users/YOGENDERSS/events{/privacy}",
"received_events_url": "https://api.github.com/users/YOGENDERSS/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This PR fixes a typo or improves the docs.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19429). All of your documentation changes will be reflected on that endpoint.",
"@sgugger could you please review this sir.",
"@codePerfectPlus sir could you please review if possible."
] | 1,665
| 1,666
| 1,666
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19429/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19429",
"html_url": "https://github.com/huggingface/transformers/pull/19429",
"diff_url": "https://github.com/huggingface/transformers/pull/19429.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19429.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19428
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19428/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19428/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19428/events
|
https://github.com/huggingface/transformers/pull/19428
| 1,401,928,356
|
PR_kwDOCUB6oc5Abse6
| 19,428
|
Small fix for `AutoTokenizer` using opt model. Use `GPT2TokenizerFast`
|
{
"login": "clementapa",
"id": 45719060,
"node_id": "MDQ6VXNlcjQ1NzE5MDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/45719060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clementapa",
"html_url": "https://github.com/clementapa",
"followers_url": "https://api.github.com/users/clementapa/followers",
"following_url": "https://api.github.com/users/clementapa/following{/other_user}",
"gists_url": "https://api.github.com/users/clementapa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clementapa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clementapa/subscriptions",
"organizations_url": "https://api.github.com/users/clementapa/orgs",
"repos_url": "https://api.github.com/users/clementapa/repos",
"events_url": "https://api.github.com/users/clementapa/events{/privacy}",
"received_events_url": "https://api.github.com/users/clementapa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR allows using `GPT2TokenizerFast` for OPT when using the `from_pretrained` method of `AutoTokenizer`.
I checked that both tokenizers give the same result after my change.
```
from transformers import AutoTokenizer
tokenizer_slow = AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=False)
> PreTrainedTokenizer(name_or_path='facebook/opt-350m', vocab_size=50265, model_max_len=1000000000000000019884624838656, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=True)})
tokenizer_fast = AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=True)
> PreTrainedTokenizerFast(name_or_path='facebook/opt-350m', vocab_size=50265, model_max_len=1000000000000000019884624838656, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=True)})
tokenizer_slow == tokenizer_fast
> False
text='Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.'
token_slow = tokenizer_slow(text)
token_fast = tokenizer_fast(text)
token_slow == token_fast
> True
```
However, the doc of OPT https://huggingface.co/docs/transformers/v4.22.2/en/model_doc/opt#overview advises not to use FastTokenizer for OPT.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19428/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19428",
"html_url": "https://github.com/huggingface/transformers/pull/19428",
"diff_url": "https://github.com/huggingface/transformers/pull/19428.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19428.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19427
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19427/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19427/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19427/events
|
https://github.com/huggingface/transformers/pull/19427
| 1,401,914,756
|
PR_kwDOCUB6oc5AbqDI
| 19,427
|
Adding the README_es.md and reference to it in the others files readme
|
{
"login": "Oussamaosman02",
"id": 109099115,
"node_id": "U_kgDOBoC4aw",
"avatar_url": "https://avatars.githubusercontent.com/u/109099115?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oussamaosman02",
"html_url": "https://github.com/Oussamaosman02",
"followers_url": "https://api.github.com/users/Oussamaosman02/followers",
"following_url": "https://api.github.com/users/Oussamaosman02/following{/other_user}",
"gists_url": "https://api.github.com/users/Oussamaosman02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oussamaosman02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oussamaosman02/subscriptions",
"organizations_url": "https://api.github.com/users/Oussamaosman02/orgs",
"repos_url": "https://api.github.com/users/Oussamaosman02/repos",
"events_url": "https://api.github.com/users/Oussamaosman02/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oussamaosman02/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sguggerI think it's done! :smiley: , If there is something else,tell me",
"@osanseviero I have read and accept the suggestions you make, only few small mistakes that i commented.Thanks a lot for answering so fast!",
"Hey there! I fixed the 2 suggestions you mentioned :) feel free to commit them now :D ",
"@osanseviero @sgugger everything clear and ok now. Have a nice day! :smile_cat: ",
"Can you just run `make fix-copies` on your branch to fix the CLI issue? There is probably one or two models out of sync between the READMEs.",
"I hope now everything is right, sorry for the mistakes :smiling_face_with_tear: ",
"No worries, looking good now. Thanks again!"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR added the README.md file in a Spanish version and updated the other README files to reference it correctly.
I made this because the docs exist in Spanish, but there was no README in Spanish.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19427/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19427",
"html_url": "https://github.com/huggingface/transformers/pull/19427",
"diff_url": "https://github.com/huggingface/transformers/pull/19427.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19427.patch",
"merged_at": 1665507386000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19426
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19426/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19426/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19426/events
|
https://github.com/huggingface/transformers/pull/19426
| 1,401,913,676
|
PR_kwDOCUB6oc5Abp2T
| 19,426
|
made tokenization_roformer independent of bert
|
{
"login": "naveennamani",
"id": 19327609,
"node_id": "MDQ6VXNlcjE5MzI3NjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/19327609?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/naveennamani",
"html_url": "https://github.com/naveennamani",
"followers_url": "https://api.github.com/users/naveennamani/followers",
"following_url": "https://api.github.com/users/naveennamani/following{/other_user}",
"gists_url": "https://api.github.com/users/naveennamani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/naveennamani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naveennamani/subscriptions",
"organizations_url": "https://api.github.com/users/naveennamani/orgs",
"repos_url": "https://api.github.com/users/naveennamani/repos",
"events_url": "https://api.github.com/users/naveennamani/events{/privacy}",
"received_events_url": "https://api.github.com/users/naveennamani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Gently pinging @sgugger @ArthurZucker for re-distribution here ",
"Will take care of it π",
"Don't worrk @ArthurZucker I'll review :-) This is linked to #19303 and I should have been tagged, not Patrick :-)",
"@sgugger when I opened the PR, in the PR template it showed me @patrickvonplaten is looking after roformer which I modified, so tagged him for the review.",
"No worries at all @naveennamani , but as @patrickvonplaten is very busy on plenty of other things and you PR should not make any actual changes so Patrck's input won't be necessary. I'll review it tomorrow :-)",
"Hi @sgugger , I've modified the comments as per your suggestion.",
"I missed it somehow, fixed it "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Relates to issue #19303
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19303
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19426/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19426",
"html_url": "https://github.com/huggingface/transformers/pull/19426",
"diff_url": "https://github.com/huggingface/transformers/pull/19426.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19426.patch",
"merged_at": 1665583990000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19425
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19425/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19425/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19425/events
|
https://github.com/huggingface/transformers/issues/19425
| 1,401,908,049
|
I_kwDOCUB6oc5Tj2tR
| 19,425
|
Error while converting BigBirdPegasus tensorflow checkpoints into pytorch model using "convert_bigbird_pegasus_tf_to_pytorch.py"
|
{
"login": "dehghanm",
"id": 28918699,
"node_id": "MDQ6VXNlcjI4OTE4Njk5",
"avatar_url": "https://avatars.githubusercontent.com/u/28918699?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dehghanm",
"html_url": "https://github.com/dehghanm",
"followers_url": "https://api.github.com/users/dehghanm/followers",
"following_url": "https://api.github.com/users/dehghanm/following{/other_user}",
"gists_url": "https://api.github.com/users/dehghanm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dehghanm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dehghanm/subscriptions",
"organizations_url": "https://api.github.com/users/dehghanm/orgs",
"repos_url": "https://api.github.com/users/dehghanm/repos",
"events_url": "https://api.github.com/users/dehghanm/events{/privacy}",
"received_events_url": "https://api.github.com/users/dehghanm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
NONE
| null |
Hi,
I want to convert TensorFlow checkpoints generated from training a BigBirdPegasus model for the summarization task into a PyTorch model, using the script prepared for it (i.e., "convert_bigbird_pegasus_tf_to_pytorch.py"), which is located at the following URL:
https://github.com/huggingface/transformers/tree/main/src/transformers/models/bigbird_pegasus/modeling_bigbird_pegasus.py
During conversion, I encountered the following error:
Traceback (most recent call last):
  File "convert_ckpt.py", line 212, in <module>
    convert_bigbird_pegasus_ckpt_to_pytorch(args.tf_ckpt_path, args.save_dir, config_update=config_update)
  File "convert_ckpt.py", line 202, in convert_bigbird_pegasus_ckpt_to_pytorch
    torch_model = convert_bigbird_pegasus(tf_weights, config_update)
  File "convert_ckpt.py", line 148, in convert_bigbird_pegasus
    raise ValueError(f"could not find new key {new_k} in state dict. (converted from {k})")
ValueError: could not find new key model.decoder.layernorm_embedding.bias.Adafactor in state dict. (converted from pegasus/decoder/LayerNorm/beta/Adafactor)
Can anybody help me with solving this error?
Thanks in advance
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19425/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19424
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19424/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19424/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19424/events
|
https://github.com/huggingface/transformers/pull/19424
| 1,401,851,779
|
PR_kwDOCUB6oc5AbeqG
| 19,424
|
Fix typo in image-classification/README.md
|
{
"login": "zhawe01",
"id": 15643982,
"node_id": "MDQ6VXNlcjE1NjQzOTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/15643982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhawe01",
"html_url": "https://github.com/zhawe01",
"followers_url": "https://api.github.com/users/zhawe01/followers",
"following_url": "https://api.github.com/users/zhawe01/following{/other_user}",
"gists_url": "https://api.github.com/users/zhawe01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhawe01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhawe01/subscriptions",
"organizations_url": "https://api.github.com/users/zhawe01/orgs",
"repos_url": "https://api.github.com/users/zhawe01/repos",
"events_url": "https://api.github.com/users/zhawe01/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhawe01/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Fix link typo of the following content.
PyTorch version, Trainer
PyTorch version, no Trainer
# What does this PR do?
Fixes a typo
## Who can review?
@sgugger @NielsRogge
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19424/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19424",
"html_url": "https://github.com/huggingface/transformers/pull/19424",
"diff_url": "https://github.com/huggingface/transformers/pull/19424.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19424.patch",
"merged_at": 1665407818000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19423
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19423/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19423/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19423/events
|
https://github.com/huggingface/transformers/issues/19423
| 1,401,845,861
|
I_kwDOCUB6oc5Tjnhl
| 19,423
|
Error while running DeepSpeed in DreamBooth
|
{
"login": "aniketgore",
"id": 22811139,
"node_id": "MDQ6VXNlcjIyODExMTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/22811139?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aniketgore",
"html_url": "https://github.com/aniketgore",
"followers_url": "https://api.github.com/users/aniketgore/followers",
"following_url": "https://api.github.com/users/aniketgore/following{/other_user}",
"gists_url": "https://api.github.com/users/aniketgore/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aniketgore/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aniketgore/subscriptions",
"organizations_url": "https://api.github.com/users/aniketgore/orgs",
"repos_url": "https://api.github.com/users/aniketgore/repos",
"events_url": "https://api.github.com/users/aniketgore/events{/privacy}",
"received_events_url": "https://api.github.com/users/aniketgore/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
NONE
| null |
### System Info
NotImplementedError: Could not run 'xformers::efficient_attention_forward_generic' with arguments from the 'CUDA' backend.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
NotImplementedError: Could not run 'xformers::efficient_attention_forward_generic' with arguments from the 'CUDA' backend.
### Expected behavior
NotImplementedError: Could not run 'xformers::efficient_attention_forward_generic' with arguments from the 'CUDA' backend.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19423/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19423/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19422
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19422/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19422/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19422/events
|
https://github.com/huggingface/transformers/issues/19422
| 1,401,828,823
|
I_kwDOCUB6oc5TjjXX
| 19,422
|
This error occurs: how do I fix it?
|
{
"login": "rajatj86",
"id": 85836532,
"node_id": "MDQ6VXNlcjg1ODM2NTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/85836532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajatj86",
"html_url": "https://github.com/rajatj86",
"followers_url": "https://api.github.com/users/rajatj86/followers",
"following_url": "https://api.github.com/users/rajatj86/following{/other_user}",
"gists_url": "https://api.github.com/users/rajatj86/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajatj86/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajatj86/subscriptions",
"organizations_url": "https://api.github.com/users/rajatj86/orgs",
"repos_url": "https://api.github.com/users/rajatj86/repos",
"events_url": "https://api.github.com/users/rajatj86/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajatj86/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @rajatj86, it's unfortunate that this issue occurs. Unless you have important data in your Hugging Face cache, I would advise removing it.\r\n\r\nOtherwise, please update both your `transformers` and `huggingface_hub` versions and post the message here, as it should contain more information.\r\n\r\nThank you!\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
NONE
| null |
torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:
torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x0000024B8516D550>.
  warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x0000024B851838B0>.
  warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
Moving 6 files to the new cache system
0%| | 0/6 [00:02<?, ?it/s]
There was a problem when trying to move your cache:
  File "transformers\utils\hub.py", line 1077, in <module>
  File "transformers\utils\hub.py", line 1040, in move_cache
  File "transformers\utils\hub.py", line 997, in move_to_new_cache
  File "huggingface_hub\file_download.py", line 841, in _create_relative_symlink
Please file an issue at https://github.com/huggingface/transformers/issues/new/choose and copy paste this whole message and we will do our best to help.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19422/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19421
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19421/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19421/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19421/events
|
https://github.com/huggingface/transformers/pull/19421
| 1,401,819,662
|
PR_kwDOCUB6oc5AbYkT
| 19,421
|
Remove GPT-2 tokenizer dependancy from Deberta Tokenizers
|
{
"login": "RamitPahwa",
"id": 16895131,
"node_id": "MDQ6VXNlcjE2ODk1MTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/16895131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RamitPahwa",
"html_url": "https://github.com/RamitPahwa",
"followers_url": "https://api.github.com/users/RamitPahwa/followers",
"following_url": "https://api.github.com/users/RamitPahwa/following{/other_user}",
"gists_url": "https://api.github.com/users/RamitPahwa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RamitPahwa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RamitPahwa/subscriptions",
"organizations_url": "https://api.github.com/users/RamitPahwa/orgs",
"repos_url": "https://api.github.com/users/RamitPahwa/repos",
"events_url": "https://api.github.com/users/RamitPahwa/events{/privacy}",
"received_events_url": "https://api.github.com/users/RamitPahwa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @sgugger \r\nPlease review the above PR when you get a moment.",
"I raised a Clean PR #19551 for this work will close once that is merged, @sgugger Please review "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Hi @sgugger
Related to #19303 ,
- the GPT2Tokenizer dependency has been removed from DebertaTokenizer
- the GPT2TokenizerFast dependency has been removed from DebertaTokenizerFast
I ran `pytest tests/models/deberta/test_tokenization_deberta.py`, which passed.
Thanks for reviewing!
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19421/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19421",
"html_url": "https://github.com/huggingface/transformers/pull/19421",
"diff_url": "https://github.com/huggingface/transformers/pull/19421.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19421.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19420
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19420/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19420/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19420/events
|
https://github.com/huggingface/transformers/issues/19420
| 1,401,809,394
|
I_kwDOCUB6oc5Tjeny
| 19,420
|
Load config error, permission denied and EnvironmentError
|
{
"login": "created-Bi",
"id": 28759055,
"node_id": "MDQ6VXNlcjI4NzU5MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/28759055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/created-Bi",
"html_url": "https://github.com/created-Bi",
"followers_url": "https://api.github.com/users/created-Bi/followers",
"following_url": "https://api.github.com/users/created-Bi/following{/other_user}",
"gists_url": "https://api.github.com/users/created-Bi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/created-Bi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/created-Bi/subscriptions",
"organizations_url": "https://api.github.com/users/created-Bi/orgs",
"repos_url": "https://api.github.com/users/created-Bi/repos",
"events_url": "https://api.github.com/users/created-Bi/events{/privacy}",
"received_events_url": "https://api.github.com/users/created-Bi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello! It seems like you don't have read access into the cache generated by huggingface?\r\n\r\nYou're getting a permission denied to read the file.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
NONE
| null |
### System Info
_libgcc_mutex 0.1 main https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
_pytorch_select 0.2 gpu_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
aiohttp 3.8.3 pypi_0 pypi
aiosignal 1.2.0 pypi_0 pypi
async-timeout 4.0.2 pypi_0 pypi
asynctest 0.13.0 pypi_0 pypi
attrs 22.1.0 pypi_0 pypi
blas 1.0 mkl https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
blessed 1.19.1 pypi_0 pypi
boto3 1.16.7 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
botocore 1.19.9 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
bottleneck 1.3.4 py37hce1f21e_0
brotli 1.0.9 he6710b0_2
brotlipy 0.7.0 py37h27cfd23_1003 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
bzip2 1.0.8 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ca-certificates 2022.07.19 h06a4308_0
certifi 2022.6.15 py37h06a4308_0
cffi 1.14.3 py37h261ae71_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
charset-normalizer 2.0.4 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
click 8.0.1 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cryptography 3.1.1 py37h1ba5d50_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cudatoolkit 10.2.89 hfd86e86_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cudnn 7.6.5 cuda10.2_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cycler 0.11.0 pyhd3eb1b0_0
datasets 2.5.2 pypi_0 pypi
dbus 1.13.18 hb2f20db_0
dill 0.3.5.1 pypi_0 pypi
docutils 0.14 py37_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
elastic-transport 8.1.2 pypi_0 pypi
elasticsearch 7.14.1 pypi_0 pypi
et_xmlfile 1.1.0 py37h06a4308_0
expat 2.4.4 h295c915_0
ffmpeg 4.2.2 h20bf706_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
filelock 3.7.1 pypi_0 pypi
flask 2.1.3 pypi_0 pypi
fontconfig 2.13.1 h6c09931_0
fonttools 4.34.4 pypi_0 pypi
freetype 2.11.0 h70c0345_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
frozenlist 1.3.1 pypi_0 pypi
fsspec 2022.8.2 pypi_0 pypi
gevent 21.12.0 pypi_0 pypi
gevent-websocket 0.10.1 pypi_0 pypi
giflib 5.2.1 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
glib 2.69.1 h4ff587b_1
gmp 6.2.1 h2531618_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
gnutls 3.6.15 he1e5248_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
gpustat 1.0.0 pypi_0 pypi
greenlet 1.1.2 pypi_0 pypi
gst-plugins-base 1.14.0 h8213a91_2
gstreamer 1.14.0 h28cd5cc_2
huggingface-hub 0.8.1 pypi_0 pypi
icu 58.2 he6710b0_3
idna 2.10 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
importlib-metadata 4.11.3 py37h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
intel-openmp 2020.2 254 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
itsdangerous 2.1.2 pypi_0 pypi
jinja2 3.1.2 pypi_0 pypi
jmespath 0.10.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
joblib 1.0.1 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
jpeg 9e h7f8727e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
kiwisolver 1.4.4 pypi_0 pypi
lame 3.100 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
lcms2 2.12 h3be6417_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ld_impl_linux-64 2.33.1 h53a641e_7 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libedit 3.1.20191231 h14c3975_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libffi 3.3 he6710b0_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libgcc-ng 9.1.0 hdf63c60_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libgfortran-ng 7.5.0 ha8ba4b0_17
libgfortran4 7.5.0 ha8ba4b0_17
libidn2 2.3.2 h7f8727e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libopus 1.3.1 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libpng 1.6.37 hbc83047_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libstdcxx-ng 9.1.0 hdf63c60_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libtasn1 4.16.0 h27cfd23_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libtiff 4.2.0 h85742a9_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libunistring 0.9.10 h27cfd23_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libuuid 1.0.3 h7f8727e_2
libuv 1.40.0 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libvpx 1.7.0 h439df22_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libwebp 1.2.2 h55f646e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libwebp-base 1.2.2 h7f8727e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libxcb 1.15 h7f8727e_0
libxml2 2.9.10 hb55368b_3
lpips 0.1.4 pypi_0 pypi
lz4-c 1.9.3 h295c915_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
markupsafe 2.1.1 pypi_0 pypi
matplotlib 3.5.2 pypi_0 pypi
matplotlib-base 3.5.1 py37ha18d171_1
mkl 2020.2 256 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl-service 2.3.0 py37he8ac12f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl_fft 1.2.0 py37h23d657b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl_random 1.1.1 py37h0573a6f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
multidict 6.0.2 pypi_0 pypi
multiprocess 0.70.13 pypi_0 pypi
munkres 1.1.4 py_0
ncurses 6.2 he6710b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
nettle 3.7.3 hbbd107a_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ninja 1.10.1 py37hfd86e86_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
nltk 3.6.2 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
numexpr 2.7.3 py37hb2eb853_0
numpy 1.19.2 py37h54aff64_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
numpy-base 1.19.2 py37hfa32c7d_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
nvidia-ml-py 11.495.46 pypi_0 pypi
opencv-python 4.6.0.66 pypi_0 pypi
openh264 2.1.1 h4ff587b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
openpyxl 3.0.9 pyhd3eb1b0_0
openssl 1.1.1q h7f8727e_0
opentsne 0.6.2 pypi_0 pypi
packaging 21.3 pyhd3eb1b0_0
pandas 1.3.5 pypi_0 pypi
pcre 8.45 h295c915_0
pillow 9.2.0 pypi_0 pypi
pip 20.2.4 py37h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
psutil 5.9.2 pypi_0 pypi
pyarrow 9.0.0 pypi_0 pypi
pycparser 2.20 py_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyopenssl 19.1.0 py_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyparsing 3.0.9 py37h06a4308_0
pyqt 5.9.2 py37h05f1152_2
pysocks 1.7.1 py37_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
python 3.7.10 hdb3f193_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
python-dateutil 2.8.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pytorch-mutex 1.0 cpu pytorch-nightly
pytz 2022.1 py37h06a4308_0
pyyaml 6.0 pypi_0 pypi
qt 5.9.7 h5867ecd_1
readline 8.0 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
regex 2021.7.6 py37h7f8727e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
requests 2.28.1 pypi_0 pypi
responses 0.18.0 pypi_0 pypi
s3transfer 0.3.3 pyhd3eb1b0_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
sacremoses 0.0.53 pypi_0 pypi
scikit-learn 1.0.2 pypi_0 pypi
scipy 1.7.3 pypi_0 pypi
seaborn 0.11.2 pyhd3eb1b0_0
sentence-transformers 2.0.0 pypi_0 pypi
sentencepiece 0.1.96 pypi_0 pypi
setuptools 50.3.0 py37h06a4308_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
sip 4.19.8 py37hf484d3e_0
six 1.15.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
sqlite 3.33.0 h62c20be_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
threadpoolctl 3.1.0 pypi_0 pypi
tk 8.6.10 hbc83047_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tokenizers 0.9.4 pypi_0 pypi
torch 1.11.0+cu113 pypi_0 pypi
torchaudio 0.11.0+cu113 pypi_0 pypi
torchvision 0.12.0+cu113 pypi_0 pypi
tornado 6.1 py37h27cfd23_0
tqdm 4.64.0 py37h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
transformers 4.2.1 pypi_0 pypi
typing_extensions 3.10.0.0 pyh06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
urllib3 1.26.10 pypi_0 pypi
wcwidth 0.2.5 pypi_0 pypi
werkzeug 2.0.1 pypi_0 pypi
wget 3.2 pypi_0 pypi
wheel 0.35.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
x264 1!157.20191217 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
xlrd 2.0.1 pypi_0 pypi
xlsxwriter 3.0.3 pyhd3eb1b0_0
xlwt 1.3.0 pypi_0 pypi
xxhash 3.0.0 pypi_0 pypi
xz 5.2.5 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
yarl 1.8.1 pypi_0 pypi
zhconv 1.4.3 pypi_0 pypi
zipp 3.5.0 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
zlib 1.2.11 h7b6447c_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
zope-event 4.5.0 pypi_0 pypi
zope-interface 5.4.0 pypi_0 pypi
zstd 1.4.9 haebb681_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction


### Expected behavior
load the config correctly
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19420/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19419
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19419/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19419/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19419/events
|
https://github.com/huggingface/transformers/issues/19419
| 1,401,765,846
|
I_kwDOCUB6oc5TjT_W
| 19,419
|
Stacktrace migrating cache opening OpenAI Whisper
|
{
"login": "danielzgtg",
"id": 25646384,
"node_id": "MDQ6VXNlcjI1NjQ2Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/25646384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danielzgtg",
"html_url": "https://github.com/danielzgtg",
"followers_url": "https://api.github.com/users/danielzgtg/followers",
"following_url": "https://api.github.com/users/danielzgtg/following{/other_user}",
"gists_url": "https://api.github.com/users/danielzgtg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danielzgtg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danielzgtg/subscriptions",
"organizations_url": "https://api.github.com/users/danielzgtg/orgs",
"repos_url": "https://api.github.com/users/danielzgtg/repos",
"events_url": "https://api.github.com/users/danielzgtg/events{/privacy}",
"received_events_url": "https://api.github.com/users/danielzgtg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @danielzgtg π \r\n\r\nI believe it is the same issue as in https://github.com/huggingface/transformers/issues/19384, with the same resolution as listed there -- it should be fixed today/tomorrow, with the new release of `transformers`\r\n\r\nMeanwhile, you may be able to get rid of that error if you install `huggingface_hub==0.9.0` :)\r\n\r\n(cc @LysandreJik )",
"Indeed, upgrading `huggingface_hub` to the latest version (which is 0.10.0!) should solve the error.",
"> may be able to get rid of that error if you install huggingface_hub==0.9.0\r\n\r\n> (which is 0.10.0!) should solve the error.\r\n\r\nSo which one is it? 0.9.0 or 0.10.0? I had 0.10.0:\r\n\r\n```\r\nCollecting huggingface-hub<1.0,>=0.9.0\r\n Downloading huggingface_hub-0.10.0-py3-none-any.whl (163 kB)\r\n ββββββββββββββββββββββββββββββββββββββββ 163.5/163.5 kB 63.2 MB/s eta 0:00:00\r\n```\r\n\r\nAnyway, I don't yet know how to reproduce this message. It only appeared for me once. I do hope that I can any necessary cache migration to succeed though",
"> I had 0.10.0\r\n\r\nThat's why I suggested `0.9.0` :) It seems to be a problem due to a temporary version mismatch between `transformers` and `huggingface_hub` (see https://github.com/huggingface/transformers/pull/19244)\r\n\r\nIn any case, it's very hard to reproduce the issue -- it seems to happen when migrating from the old cache version (i.e. if you had used `transformers<=4.21` in your system) into the new cache version, which only happens once per system, AND have incompatible versions of `transformers`+`huggingface_hub`. New pip installs shouldn't see this error, even if they have an old cache",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
NONE
| null |
### System Info
```
(venv) home@daniel-tablet1:~/PycharmProjects/whisper$ transformers-cli env
Traceback (most recent call last):
File "/home/home/PycharmProjects/whisper/venv/bin/transformers-cli", line 5, in <module>
from transformers.commands.transformers_cli import main
File "/home/home/PycharmProjects/whisper/venv/lib/python3.10/site-packages/transformers/commands/transformers_cli.py", line 24, in <module>
from .pt_to_tf import PTtoTFCommand
File "/home/home/PycharmProjects/whisper/venv/lib/python3.10/site-packages/transformers/commands/pt_to_tf.py", line 21, in <module>
from datasets import load_dataset
ModuleNotFoundError: No module named 'datasets'
```
```
(venv) home@daniel-tablet1:~/PycharmProjects/whisper$ git show-ref
9e653bd0ea0f1e9493cb4939733e9de249493cfb refs/heads/main
9e653bd0ea0f1e9493cb4939733e9de249493cfb refs/remotes/origin/HEAD
9e653bd0ea0f1e9493cb4939733e9de249493cfb refs/remotes/origin/main
```
<details>
<summary>.cache/huggingface</summary>
```
(venv) home@daniel-tablet1:~/.cache/huggingface$ find
.
./hub
./hub/273c26d519eca3d37b6907fca55b4570903094837c1e88f41544c2d7a1ef9b36.2581b5124d154f09d9841e3f106147b17807bdc9b30338c2f6b065a7119328b8.lock
./hub/version.txt
./hub/273c26d519eca3d37b6907fca55b4570903094837c1e88f41544c2d7a1ef9b36.2581b5124d154f09d9841e3f106147b17807bdc9b30338c2f6b065a7119328b8
./hub/273c26d519eca3d37b6907fca55b4570903094837c1e88f41544c2d7a1ef9b36.2581b5124d154f09d9841e3f106147b17807bdc9b30338c2f6b065a7119328b8.json
./transformers
./transformers/91e9fe874e06c44883b535d6c950b8b89d6eaa3298d8e7fb3b2c78039e9f8b7b.66b9637a52aa11e9285cdd6e668cc0df14b3bcf0b6674cf3ba5353c542649637.json
./transformers/45c3f7a79a80e1cf0a489e5c62b43f173c15db47864303a55d623bb3c96f72a5.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
./transformers/e6eeef886a597ad9496f7a38414dc332f49fd0e18bc279439f19f6ef80a6830f.150cd75d571e557b7d1dc1a3fd74c0ebe252b855739e47c8040a11a362b2f912.json
./transformers/d0404704aff7a47b8d8a30573cb4f67045bf89101e3200146c2a1a55f182d380.a3dc3058cc957fef449bfe2a4db7cdca4c9b0f7c0b2a9c4bc6228ba024621a78.h5
./transformers/775efbdc2152093295bc5824dee96da82a5f3c1f218dfface1b8cef3094bdf8f.c719a806caef7d36ec0185f14b3b5fa727d919f924abe35622b4b7147bfbb8c7.h5
./transformers/83261b0c74c462e53d6367de0646b1fca07d0f15f1be045156b9cf8c71279cc9.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.lock
./transformers/5ebfd830555547194403d6803baa127970de59b443c04b7a1a60b16a97ed3958.b589da7dac64196f9764abaf2c4c7e507cec8b14b96da3ef270d924f155062de.lock
./transformers/748a176e9d151dcad63a27974db8b8f665f286954cfbb77008ca42163419ff66.6a323429db2b09562cffdb9bc72d09d08bccbca1d832434b183b867864c30526.h5.lock
./transformers/c0abea01d3725dc3c06370cced02822e09a715c98c62346f5ec9b730361df18d.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79
./transformers/3b13d6000bf0faa8f68bbbfabc744100e2abc27c7c8612bf1269bd79fd94fa3d.3df0d73ec7fbb471c0502e9bf5b52515f84d3af812b70f08e7ce8200d268c366.h5.lock
./transformers/e727ad0b5b727e965ac92d0d987189dd8baca246cc5d9cd2d2991f5bd3a286c5.5fd7d9eb368cd9cb55495ec20862b533efee02e1e074c3bc7bf451b25b4fe59e
./transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f
./transformers/6e443a2ed9a4346cca5f4fb9986a60fea956b0f74694596632e5d37302cd2d51.6e9c56f90d0ccc4bb88c2360463bcbd3a5d5688b9ba81e6bcea7316ac803e5ca.json
./transformers/4ac94ea87276ca5a0c5bca5048e2dc4ff34d8c0cc5d48e4205bf5390f7290fd1.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
./transformers/3c61d016573b14f7f008c02c4e51a366c67ab274726fe2910691e2a761acf43e.37395cee442ab11005bcd270f3c34464dc1704b715b5d7d52b1a461abe3b9e4e.json
./transformers/41c2fc682e5acee0c74105c9950da8f133eef8879ef0e2e2edd37c4d237da2ee.ffac6e54739b6e6cd3d9e8b6671a9514d3b1b755459a51fdc1749d110e5a5a1d.h5.lock
./transformers/375a542f256f8537243b49f47691b6b370e74950f71552629ff41b4025cdc719.dd8bd9bfd3664b530ea4e645105f557769387b3da9f79bdb55ed556bdd80611d.lock
./transformers/16b07bde9fc789a1d5bafeeb361edfe9e4df30077f3f8150f33130800dd9fab7.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f.json
./transformers/e584858c24b9c062296d83fd0d04e8037a58ca86863388b251e20d15b57d3652.4048b5693f516fd4b429d384e716f4bb0d4831de2b6c9ea2c42a86765c5ee4a1.json
./transformers/3c61d016573b14f7f008c02c4e51a366c67ab274726fe2910691e2a761acf43e.37395cee442ab11005bcd270f3c34464dc1704b715b5d7d52b1a461abe3b9e4e.lock
./transformers/4764ec347af4d2d6286acbe1d9d630ac0afd8554a4c4a64170e0b663fd2e2412.84ea7af2df68dc8db434d3160aab65cce8ac63ce5b6f7743f8c9a4a14b4f77e2.json
./transformers/16b07bde9fc789a1d5bafeeb361edfe9e4df30077f3f8150f33130800dd9fab7.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f
./transformers/b4f8395edd321fd7cd8a87bca767b1135680a41d8931516dd1a447294633b9db.647b4548b6d9ea817e82e7a9231a320231a1c9ea24053cc9e758f3fe68216f05
./transformers/425529714b758f50b6d3f93f8093d859856fd41cf1cec7c8edf2ab44aee632b6.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.json
./transformers/540455855ce0a3c13893c5d090d142de9481365bd32dc5457c957e5d13444d23.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730.lock
./transformers/afba33be693521ccefbde6d03b93b5c517d7108ba31f6c08000ed52c2cea45c9.28bbf90ae7962b1b7211c0ce8b2006f968c82439ec9c47e0847ba63642f9435a.json
./transformers/c1d7f0a763fb63861cc08553866f1fc3e5a6f4f07621be277452d26d71303b7e.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79.json
./transformers/4764ec347af4d2d6286acbe1d9d630ac0afd8554a4c4a64170e0b663fd2e2412.84ea7af2df68dc8db434d3160aab65cce8ac63ce5b6f7743f8c9a4a14b4f77e2.lock
./transformers/e727ad0b5b727e965ac92d0d987189dd8baca246cc5d9cd2d2991f5bd3a286c5.5fd7d9eb368cd9cb55495ec20862b533efee02e1e074c3bc7bf451b25b4fe59e.json
./transformers/684a47ca6257e4ca71f0037771464c5b323e945fbc58697d2fad8a7dd1a2f8ba.3b69006860e7b5d0a63ffdddc01ddcd6b7c318a6f4fd793596552c741734c62d.lock
./transformers/16a2f78023c8dc511294f0c97b5e10fde3ef9889ad6d11ffaa2a00714e73926e.cf2d0ecb83b6df91b3dbb53f1d1e4c311578bfd3aa0e04934215a49bf9898df0.lock
./transformers/81ffd70af12a736e520c197108c70778f231f23ad374bc228dd623abf2ee373b.0afca8ac6cb45f40028b0583daf120fc891de6e9146b0683fbc8556e33714dad
./transformers/375a542f256f8537243b49f47691b6b370e74950f71552629ff41b4025cdc719.dd8bd9bfd3664b530ea4e645105f557769387b3da9f79bdb55ed556bdd80611d
./transformers/1ad22be12336f9eec2b9fa372045631e8ffe9e2ca771f6802f88b5b15651f859.c46a0ea4d8cfc938ed324724108be3e06c2fb377cfdbd57ac70f5f589bb03a44.lock
./transformers/198d2773a3a47fe909fd8bf2ab9d40f0c1355d9a45a3ecac510ab2d44390577c.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
./transformers/6b6d15ffd3a1fa3015ffff8a9a4a78371fecd1ed1f61aed8a35baf09535240ae.b2f577eb2ce415668e4a3805e4effcc3d81dae1126890ffb69936e7481327494.lock
./transformers/997406d739f356745bd01f90fc8a2ff252ce35e403d6015f2b80fc214fe9387d.67d01b18f2079bd75eac0b2f2e7235768c7f26bd728e7a855a1c5acae01a91a8.json
./transformers/90de37880b5ff5ac7ab70ff0bd369f207e9b74133fa153c163d14c5bb0116207.8627f1bd5d270a9fd2e5a51c8bec3223896587cc3cfe13edeabb0992ab43c529
./transformers/c1d7f0a763fb63861cc08553866f1fc3e5a6f4f07621be277452d26d71303b7e.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79.lock
./transformers/fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb.lock
./transformers/6e443a2ed9a4346cca5f4fb9986a60fea956b0f74694596632e5d37302cd2d51.6e9c56f90d0ccc4bb88c2360463bcbd3a5d5688b9ba81e6bcea7316ac803e5ca
./transformers/f548ad4723a1111fd380d466e7291a47148498641c693e4959c3ff05bdcef0e3.13a045cad07359e6844c4f487af8e6323ad2308cac6357692d2359f1a9711443
./transformers/16a2f78023c8dc511294f0c97b5e10fde3ef9889ad6d11ffaa2a00714e73926e.cf2d0ecb83b6df91b3dbb53f1d1e4c311578bfd3aa0e04934215a49bf9898df0
./transformers/f8eeca194a413b200e1a5bd0e44d9b97e841dab11786978da40771d35dc6dd51.61622627847a3dbefbd551fce83592689111ec347ecce4b9a7ce14d10840be24.lock
./transformers/4e60bb8efad3d4b7dc9969bf204947c185166a0a3cf37ddb6f481a876a3777b5.9f8326d0b7697c7fd57366cdde57032f46bc10e37ae81cb7eb564d66d23ec96b.lock
./transformers/9c38ef325ee9369da1b4b968f92e65ff23befb359d8c51cab821a5a2fd77467e.95aa56f5baa208e6615988f702caba3cff650a3e0fc81149995ccbc168795db4.json
./transformers/41c2fc682e5acee0c74105c9950da8f133eef8879ef0e2e2edd37c4d237da2ee.ffac6e54739b6e6cd3d9e8b6671a9514d3b1b755459a51fdc1749d110e5a5a1d.h5
./transformers/8d04c767d9d4c14d929ce7ad8e067b80c74dbdb212ef4c3fb743db4ee109fae0.9d268a35da669ead745c44d369dc9948b408da5010c6bac414414a7e33d5748c.json
./transformers/83d419fb34e90155a8d95f7799f7a7316a327dc28c7ee6bee15b5a62d3c5ca6b.00628a9eeb8baf4080d44a0abe9fe8057893de20c7cb6e6423cddbf452f7d4d8
./transformers/f8eeca194a413b200e1a5bd0e44d9b97e841dab11786978da40771d35dc6dd51.61622627847a3dbefbd551fce83592689111ec347ecce4b9a7ce14d10840be24.json
./transformers/980f2be6bd282c5079e99199d7554cfd13000433ed0fdc527e7def799e5738fe.4fdc7ce6768977d347b32986aff152e26fcebbda34ef89ac9b114971d0342e09.lock
./transformers/63cbd98723b89863bcd86a8002e823de3004a139513559246690c65521cdc9b9.38ef55c51c84ab2e78e5a0e2ea9c25830fd074df70d2f10071eb9a1bc1586ca0
./transformers/64a1d72b2bd05b0aff1a4dd9e7a90a6eea0312b4f914e80b0a923aa8f72219bd.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab
./transformers/1ad22be12336f9eec2b9fa372045631e8ffe9e2ca771f6802f88b5b15651f859.c46a0ea4d8cfc938ed324724108be3e06c2fb377cfdbd57ac70f5f589bb03a44
./transformers/569800088d6f014777e6d5d8cb61b2b8bb3d18a508a1d8af041aae6bbc6f3dfe.67d01b18f2079bd75eac0b2f2e7235768c7f26bd728e7a855a1c5acae01a91a8.lock
./transformers/e6eeef886a597ad9496f7a38414dc332f49fd0e18bc279439f19f6ef80a6830f.150cd75d571e557b7d1dc1a3fd74c0ebe252b855739e47c8040a11a362b2f912
./transformers/c0c761a63004025aeadd530c4c27b860ec4ecbe8a00531233de21d865a402598.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.json
./transformers/375a542f256f8537243b49f47691b6b370e74950f71552629ff41b4025cdc719.dd8bd9bfd3664b530ea4e645105f557769387b3da9f79bdb55ed556bdd80611d.json
./transformers/e8c98220e9166b448d2e9dfdec05e35b3b68e2c079d80fadfb4dc71e96dee028.852c05acd4c087ec9774e7ed56aeea5010c13056cc8bc37594b75b172416592c.lock
./transformers/c0abea01d3725dc3c06370cced02822e09a715c98c62346f5ec9b730361df18d.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79.lock
./transformers/4e60bb8efad3d4b7dc9969bf204947c185166a0a3cf37ddb6f481a876a3777b5.9f8326d0b7697c7fd57366cdde57032f46bc10e37ae81cb7eb564d66d23ec96b
./transformers/36135304685d914515720daa48fc1adae57803e32ab82d5bde85ef78479e9765.b548f7e307531070391a881374674824b374f829e5d8f68857012de63fe2681a.json
./transformers/19c09c9654551e163f858f3c99c226a8d0026acc4935528df3b09179204efe4c.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.lock
./transformers/533d2051a74ea66e9d039bb6c455ef98972c14ecae8a492ec8684cbb236685f9.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.lock
./transformers/534479488c54aeaf9c3406f647aa2ec13648c06771ffe269edabebd4c412da1d.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4
./transformers/ab70e5f489e00bb2df55e4bae145e9b1c7dc794cfa0fd8228e1299d400613429.f3874c2af5400915dc843c97f502c5d30edc728e5ec3b60c4bd6958e87970f75
./transformers/d44ec0488a5f13d92b3934cb68cc5849bd74ce63ede2eea2bf3c675e1e57297c.627f9558061e7bc67ed0f516b2f7efc1351772cc8553101f08748d44aada8b11.lock
./transformers/980f2be6bd282c5079e99199d7554cfd13000433ed0fdc527e7def799e5738fe.4fdc7ce6768977d347b32986aff152e26fcebbda34ef89ac9b114971d0342e09
./transformers/e8c98220e9166b448d2e9dfdec05e35b3b68e2c079d80fadfb4dc71e96dee028.852c05acd4c087ec9774e7ed56aeea5010c13056cc8bc37594b75b172416592c
./transformers/91e9fe874e06c44883b535d6c950b8b89d6eaa3298d8e7fb3b2c78039e9f8b7b.66b9637a52aa11e9285cdd6e668cc0df14b3bcf0b6674cf3ba5353c542649637.lock
./transformers/e6eeef886a597ad9496f7a38414dc332f49fd0e18bc279439f19f6ef80a6830f.150cd75d571e557b7d1dc1a3fd74c0ebe252b855739e47c8040a11a362b2f912.lock
./transformers/fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51
./transformers/715836a337ea91c1df044351c6041fcac9e268c8836a08c3aae639e8b38b4760.71e50b08dbe7e5375398e165096cacc3d2086119d6a449364490da6908de655e.json
./transformers/3b13d6000bf0faa8f68bbbfabc744100e2abc27c7c8612bf1269bd79fd94fa3d.3df0d73ec7fbb471c0502e9bf5b52515f84d3af812b70f08e7ce8200d268c366.h5.json
./transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f.lock
./transformers/4ac94ea87276ca5a0c5bca5048e2dc4ff34d8c0cc5d48e4205bf5390f7290fd1.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.json
./transformers/748a176e9d151dcad63a27974db8b8f665f286954cfbb77008ca42163419ff66.6a323429db2b09562cffdb9bc72d09d08bccbca1d832434b183b867864c30526.h5
./transformers/6e443a2ed9a4346cca5f4fb9986a60fea956b0f74694596632e5d37302cd2d51.6e9c56f90d0ccc4bb88c2360463bcbd3a5d5688b9ba81e6bcea7316ac803e5ca.lock
./transformers/702389a9cec22f2d79bf3fe49280d2eb5525b574d7a08fa786e30afd16b73de2.f45e1d59b04808261852aa4e0864ba21e35e23fbead10958b80bf4330c93aad2.lock
./transformers/16a2f78023c8dc511294f0c97b5e10fde3ef9889ad6d11ffaa2a00714e73926e.cf2d0ecb83b6df91b3dbb53f1d1e4c311578bfd3aa0e04934215a49bf9898df0.json
./transformers/55c96bd962ce1d360fde4947619318f1b4eb551430de678044699cbfeb99de6a.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730
./transformers/d077eac6b48c43618a441cba6eab600a5cc6383b98e7eada6d1ad4d3f3cc457e.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730
./transformers/4764ec347af4d2d6286acbe1d9d630ac0afd8554a4c4a64170e0b663fd2e2412.84ea7af2df68dc8db434d3160aab65cce8ac63ce5b6f7743f8c9a4a14b4f77e2
./transformers/3b13d6000bf0faa8f68bbbfabc744100e2abc27c7c8612bf1269bd79fd94fa3d.3df0d73ec7fbb471c0502e9bf5b52515f84d3af812b70f08e7ce8200d268c366.h5
./transformers/4ac94ea87276ca5a0c5bca5048e2dc4ff34d8c0cc5d48e4205bf5390f7290fd1.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.lock
./transformers/afba33be693521ccefbde6d03b93b5c517d7108ba31f6c08000ed52c2cea45c9.28bbf90ae7962b1b7211c0ce8b2006f968c82439ec9c47e0847ba63642f9435a
./transformers/980f2be6bd282c5079e99199d7554cfd13000433ed0fdc527e7def799e5738fe.4fdc7ce6768977d347b32986aff152e26fcebbda34ef89ac9b114971d0342e09.json
./transformers/c0c761a63004025aeadd530c4c27b860ec4ecbe8a00531233de21d865a402598.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
./transformers/74a3f992bf31343d09735202aa941b8b974c3c50506826429779f938d27705f7.1788df22ba1a6817edb607a56efa931ee13ebad3b3500e58029a8f4e6d799a29.lock
./transformers/e8c98220e9166b448d2e9dfdec05e35b3b68e2c079d80fadfb4dc71e96dee028.852c05acd4c087ec9774e7ed56aeea5010c13056cc8bc37594b75b172416592c.json
./transformers/4029f7287fbd5fa400024f6bbfcfeae9c5f7906ea97afcaaa6348ab7c6a9f351.723d8eaff3b27ece543e768287eefb59290362b8ca3b1c18a759ad391dca295a.h5
./transformers/702389a9cec22f2d79bf3fe49280d2eb5525b574d7a08fa786e30afd16b73de2.f45e1d59b04808261852aa4e0864ba21e35e23fbead10958b80bf4330c93aad2.json
./transformers/55c96bd962ce1d360fde4947619318f1b4eb551430de678044699cbfeb99de6a.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730.lock
./transformers/74a3f992bf31343d09735202aa941b8b974c3c50506826429779f938d27705f7.1788df22ba1a6817edb607a56efa931ee13ebad3b3500e58029a8f4e6d799a29.json
./transformers/684fe667923972fb57f6b4dcb61a3c92763ad89882f3da5da9866baf14f2d60f.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f
./transformers/03dbd2b11eae924dfd97070ed60502df863584957419a604e1c039e0eab3f974.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
./transformers/4e60bb8efad3d4b7dc9969bf204947c185166a0a3cf37ddb6f481a876a3777b5.9f8326d0b7697c7fd57366cdde57032f46bc10e37ae81cb7eb564d66d23ec96b.json
./transformers/5ebfd830555547194403d6803baa127970de59b443c04b7a1a60b16a97ed3958.b589da7dac64196f9764abaf2c4c7e507cec8b14b96da3ef270d924f155062de
./transformers/199ab6c0f28e763098fd3ea09fd68a0928bb297d0f76b9f3375e8a1d652748f9.930264180d256e6fe8e4ba6a728dd80e969493c23d4caa0a6f943614c52d34ab.json
./transformers/d077eac6b48c43618a441cba6eab600a5cc6383b98e7eada6d1ad4d3f3cc457e.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730.json
./transformers/d276a164c3a022c7d3c6887b2e91411b7bf2254df88506ee15510b313956d5fe.9ce994d579bd8ff52a13a561a8e7972d89bd45f20ef49a117c430147ee053da9.lock
./transformers/569800088d6f014777e6d5d8cb61b2b8bb3d18a508a1d8af041aae6bbc6f3dfe.67d01b18f2079bd75eac0b2f2e7235768c7f26bd728e7a855a1c5acae01a91a8.json
./transformers/ab70e5f489e00bb2df55e4bae145e9b1c7dc794cfa0fd8228e1299d400613429.f3874c2af5400915dc843c97f502c5d30edc728e5ec3b60c4bd6958e87970f75.lock
./transformers/533d2051a74ea66e9d039bb6c455ef98972c14ecae8a492ec8684cbb236685f9.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
./transformers/35014754ae1fcb956d44903df02e4f69d0917cab0901ace5ac7f4a4a998346fe.a30bb5d685bb3c6e9376ab4480f1b252d9796d438d1c84a9b2deb0275c5b2151
./transformers/199ab6c0f28e763098fd3ea09fd68a0928bb297d0f76b9f3375e8a1d652748f9.930264180d256e6fe8e4ba6a728dd80e969493c23d4caa0a6f943614c52d34ab.lock
./transformers/74a3f992bf31343d09735202aa941b8b974c3c50506826429779f938d27705f7.1788df22ba1a6817edb607a56efa931ee13ebad3b3500e58029a8f4e6d799a29
./transformers/775efbdc2152093295bc5824dee96da82a5f3c1f218dfface1b8cef3094bdf8f.c719a806caef7d36ec0185f14b3b5fa727d919f924abe35622b4b7147bfbb8c7.h5.lock
./transformers/03dbd2b11eae924dfd97070ed60502df863584957419a604e1c039e0eab3f974.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.json
./transformers/0d7d5b3fc19bf58d4b274990c8bcf5e307726bc18d95f40a1436dfb6a0892f85.294ebaa4cd17bb284635004c92d2c4d522ec488c828dcce0c2471b6f28e3fe82
./transformers/997406d739f356745bd01f90fc8a2ff252ce35e403d6015f2b80fc214fe9387d.67d01b18f2079bd75eac0b2f2e7235768c7f26bd728e7a855a1c5acae01a91a8.lock
./transformers/afba33be693521ccefbde6d03b93b5c517d7108ba31f6c08000ed52c2cea45c9.28bbf90ae7962b1b7211c0ce8b2006f968c82439ec9c47e0847ba63642f9435a.lock
./transformers/91e9fe874e06c44883b535d6c950b8b89d6eaa3298d8e7fb3b2c78039e9f8b7b.66b9637a52aa11e9285cdd6e668cc0df14b3bcf0b6674cf3ba5353c542649637
./transformers/03dbd2b11eae924dfd97070ed60502df863584957419a604e1c039e0eab3f974.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.lock
./transformers/997406d739f356745bd01f90fc8a2ff252ce35e403d6015f2b80fc214fe9387d.67d01b18f2079bd75eac0b2f2e7235768c7f26bd728e7a855a1c5acae01a91a8
./transformers/e727ad0b5b727e965ac92d0d987189dd8baca246cc5d9cd2d2991f5bd3a286c5.5fd7d9eb368cd9cb55495ec20862b533efee02e1e074c3bc7bf451b25b4fe59e.lock
./transformers/534479488c54aeaf9c3406f647aa2ec13648c06771ffe269edabebd4c412da1d.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4.lock
./transformers/569800088d6f014777e6d5d8cb61b2b8bb3d18a508a1d8af041aae6bbc6f3dfe.67d01b18f2079bd75eac0b2f2e7235768c7f26bd728e7a855a1c5acae01a91a8
./transformers/066c0238a1dab50404e7d118e7ad1468d20a1fc18c3f2ad1036366759bfc343d.c26bcfbd792a38251a4fb555d9110e87dcc2ecaee13ac0a027d1584df8a09634.lock
./transformers/35014754ae1fcb956d44903df02e4f69d0917cab0901ace5ac7f4a4a998346fe.a30bb5d685bb3c6e9376ab4480f1b252d9796d438d1c84a9b2deb0275c5b2151.json
./transformers/6b6d15ffd3a1fa3015ffff8a9a4a78371fecd1ed1f61aed8a35baf09535240ae.b2f577eb2ce415668e4a3805e4effcc3d81dae1126890ffb69936e7481327494
./transformers/8785a0072d807ebc8a3b6bf5648744bfc3cc83e0e845c40b670d10c0d7827164.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4.lock
./transformers/066c0238a1dab50404e7d118e7ad1468d20a1fc18c3f2ad1036366759bfc343d.c26bcfbd792a38251a4fb555d9110e87dcc2ecaee13ac0a027d1584df8a09634.json
./transformers/5ebfd830555547194403d6803baa127970de59b443c04b7a1a60b16a97ed3958.b589da7dac64196f9764abaf2c4c7e507cec8b14b96da3ef270d924f155062de.json
./transformers/4d8eeedc3498bc73a4b72411ebb3219209b305663632d77a6f16e60790b18038.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab.json
./transformers/d276a164c3a022c7d3c6887b2e91411b7bf2254df88506ee15510b313956d5fe.9ce994d579bd8ff52a13a561a8e7972d89bd45f20ef49a117c430147ee053da9
./transformers/c0c761a63004025aeadd530c4c27b860ec4ecbe8a00531233de21d865a402598.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.lock
./transformers/d276a164c3a022c7d3c6887b2e91411b7bf2254df88506ee15510b313956d5fe.9ce994d579bd8ff52a13a561a8e7972d89bd45f20ef49a117c430147ee053da9.json
./transformers/198d2773a3a47fe909fd8bf2ab9d40f0c1355d9a45a3ecac510ab2d44390577c.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.json
./transformers/19c09c9654551e163f858f3c99c226a8d0026acc4935528df3b09179204efe4c.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
./transformers/36135304685d914515720daa48fc1adae57803e32ab82d5bde85ef78479e9765.b548f7e307531070391a881374674824b374f829e5d8f68857012de63fe2681a
./transformers/533d2051a74ea66e9d039bb6c455ef98972c14ecae8a492ec8684cbb236685f9.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.json
./transformers/b4f8395edd321fd7cd8a87bca767b1135680a41d8931516dd1a447294633b9db.647b4548b6d9ea817e82e7a9231a320231a1c9ea24053cc9e758f3fe68216f05.lock
./transformers/35014754ae1fcb956d44903df02e4f69d0917cab0901ace5ac7f4a4a998346fe.a30bb5d685bb3c6e9376ab4480f1b252d9796d438d1c84a9b2deb0275c5b2151.lock
./transformers/fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb
./transformers/64a1d72b2bd05b0aff1a4dd9e7a90a6eea0312b4f914e80b0a923aa8f72219bd.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab.lock
./transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f.json
./transformers/748a176e9d151dcad63a27974db8b8f665f286954cfbb77008ca42163419ff66.6a323429db2b09562cffdb9bc72d09d08bccbca1d832434b183b867864c30526.h5.json
./transformers/e584858c24b9c062296d83fd0d04e8037a58ca86863388b251e20d15b57d3652.4048b5693f516fd4b429d384e716f4bb0d4831de2b6c9ea2c42a86765c5ee4a1.lock
./transformers/d0404704aff7a47b8d8a30573cb4f67045bf89101e3200146c2a1a55f182d380.a3dc3058cc957fef449bfe2a4db7cdca4c9b0f7c0b2a9c4bc6228ba024621a78.h5.lock
./transformers/fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb.json
./transformers/41c2fc682e5acee0c74105c9950da8f133eef8879ef0e2e2edd37c4d237da2ee.ffac6e54739b6e6cd3d9e8b6671a9514d3b1b755459a51fdc1749d110e5a5a1d.h5.json
./transformers/ab70e5f489e00bb2df55e4bae145e9b1c7dc794cfa0fd8228e1299d400613429.f3874c2af5400915dc843c97f502c5d30edc728e5ec3b60c4bd6958e87970f75.json
./transformers/e35579e8a88906e94c27c62a44b4ed91aad2f30aace4ddbb72537133beee8046.0f4e7e01b1ce2b178aebfb2722a31f84570d00b96726ed9db0caed2c0856089d
./transformers/d0404704aff7a47b8d8a30573cb4f67045bf89101e3200146c2a1a55f182d380.a3dc3058cc957fef449bfe2a4db7cdca4c9b0f7c0b2a9c4bc6228ba024621a78.h5.json
./transformers/425529714b758f50b6d3f93f8093d859856fd41cf1cec7c8edf2ab44aee632b6.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
./transformers/63cbd98723b89863bcd86a8002e823de3004a139513559246690c65521cdc9b9.38ef55c51c84ab2e78e5a0e2ea9c25830fd074df70d2f10071eb9a1bc1586ca0.lock
./transformers/90de37880b5ff5ac7ab70ff0bd369f207e9b74133fa153c163d14c5bb0116207.8627f1bd5d270a9fd2e5a51c8bec3223896587cc3cfe13edeabb0992ab43c529.lock
./transformers/c1d7f0a763fb63861cc08553866f1fc3e5a6f4f07621be277452d26d71303b7e.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79
./transformers/8785a0072d807ebc8a3b6bf5648744bfc3cc83e0e845c40b670d10c0d7827164.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4
./transformers/199ab6c0f28e763098fd3ea09fd68a0928bb297d0f76b9f3375e8a1d652748f9.930264180d256e6fe8e4ba6a728dd80e969493c23d4caa0a6f943614c52d34ab
./transformers/90de37880b5ff5ac7ab70ff0bd369f207e9b74133fa153c163d14c5bb0116207.8627f1bd5d270a9fd2e5a51c8bec3223896587cc3cfe13edeabb0992ab43c529.json
./transformers/45c3f7a79a80e1cf0a489e5c62b43f173c15db47864303a55d623bb3c96f72a5.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.json
./transformers/684fe667923972fb57f6b4dcb61a3c92763ad89882f3da5da9866baf14f2d60f.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f.json
./transformers/b4f8395edd321fd7cd8a87bca767b1135680a41d8931516dd1a447294633b9db.647b4548b6d9ea817e82e7a9231a320231a1c9ea24053cc9e758f3fe68216f05.json
./transformers/45c3f7a79a80e1cf0a489e5c62b43f173c15db47864303a55d623bb3c96f72a5.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.lock
./transformers/4d8eeedc3498bc73a4b72411ebb3219209b305663632d77a6f16e60790b18038.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab.lock
./transformers/e1881a496d5b707363a530f017ae73140e9ce35e240c7fef5b6835a26bd20492.f19e829a37b1b5e2490c86b2233b4c0af113615667600e558758f314027f668e
./transformers/4029f7287fbd5fa400024f6bbfcfeae9c5f7906ea97afcaaa6348ab7c6a9f351.723d8eaff3b27ece543e768287eefb59290362b8ca3b1c18a759ad391dca295a.h5.lock
./transformers/540455855ce0a3c13893c5d090d142de9481365bd32dc5457c957e5d13444d23.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730
./transformers/198d2773a3a47fe909fd8bf2ab9d40f0c1355d9a45a3ecac510ab2d44390577c.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.lock
./transformers/d077eac6b48c43618a441cba6eab600a5cc6383b98e7eada6d1ad4d3f3cc457e.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730.lock
./transformers/0d7d5b3fc19bf58d4b274990c8bcf5e307726bc18d95f40a1436dfb6a0892f85.294ebaa4cd17bb284635004c92d2c4d522ec488c828dcce0c2471b6f28e3fe82.json
./transformers/8d04c767d9d4c14d929ce7ad8e067b80c74dbdb212ef4c3fb743db4ee109fae0.9d268a35da669ead745c44d369dc9948b408da5010c6bac414414a7e33d5748c.lock
./transformers/83261b0c74c462e53d6367de0646b1fca07d0f15f1be045156b9cf8c71279cc9.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
./transformers/684a47ca6257e4ca71f0037771464c5b323e945fbc58697d2fad8a7dd1a2f8ba.3b69006860e7b5d0a63ffdddc01ddcd6b7c318a6f4fd793596552c741734c62d.json
./transformers/fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51.json
./transformers/540455855ce0a3c13893c5d090d142de9481365bd32dc5457c957e5d13444d23.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730.json
./transformers/715836a337ea91c1df044351c6041fcac9e268c8836a08c3aae639e8b38b4760.71e50b08dbe7e5375398e165096cacc3d2086119d6a449364490da6908de655e
./transformers/83d419fb34e90155a8d95f7799f7a7316a327dc28c7ee6bee15b5a62d3c5ca6b.00628a9eeb8baf4080d44a0abe9fe8057893de20c7cb6e6423cddbf452f7d4d8.json
./transformers/f548ad4723a1111fd380d466e7291a47148498641c693e4959c3ff05bdcef0e3.13a045cad07359e6844c4f487af8e6323ad2308cac6357692d2359f1a9711443.json
./transformers/d44ec0488a5f13d92b3934cb68cc5849bd74ce63ede2eea2bf3c675e1e57297c.627f9558061e7bc67ed0f516b2f7efc1351772cc8553101f08748d44aada8b11
./transformers/63cbd98723b89863bcd86a8002e823de3004a139513559246690c65521cdc9b9.38ef55c51c84ab2e78e5a0e2ea9c25830fd074df70d2f10071eb9a1bc1586ca0.json
./transformers/16b07bde9fc789a1d5bafeeb361edfe9e4df30077f3f8150f33130800dd9fab7.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f.lock
./transformers/e1881a496d5b707363a530f017ae73140e9ce35e240c7fef5b6835a26bd20492.f19e829a37b1b5e2490c86b2233b4c0af113615667600e558758f314027f668e.lock
./transformers/e584858c24b9c062296d83fd0d04e8037a58ca86863388b251e20d15b57d3652.4048b5693f516fd4b429d384e716f4bb0d4831de2b6c9ea2c42a86765c5ee4a1
./transformers/6b6d15ffd3a1fa3015ffff8a9a4a78371fecd1ed1f61aed8a35baf09535240ae.b2f577eb2ce415668e4a3805e4effcc3d81dae1126890ffb69936e7481327494.json
./transformers/d44ec0488a5f13d92b3934cb68cc5849bd74ce63ede2eea2bf3c675e1e57297c.627f9558061e7bc67ed0f516b2f7efc1351772cc8553101f08748d44aada8b11.json
./transformers/8d04c767d9d4c14d929ce7ad8e067b80c74dbdb212ef4c3fb743db4ee109fae0.9d268a35da669ead745c44d369dc9948b408da5010c6bac414414a7e33d5748c
./transformers/066c0238a1dab50404e7d118e7ad1468d20a1fc18c3f2ad1036366759bfc343d.c26bcfbd792a38251a4fb555d9110e87dcc2ecaee13ac0a027d1584df8a09634
./transformers/4d8eeedc3498bc73a4b72411ebb3219209b305663632d77a6f16e60790b18038.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab
./transformers/e35579e8a88906e94c27c62a44b4ed91aad2f30aace4ddbb72537133beee8046.0f4e7e01b1ce2b178aebfb2722a31f84570d00b96726ed9db0caed2c0856089d.json
./transformers/81ffd70af12a736e520c197108c70778f231f23ad374bc228dd623abf2ee373b.0afca8ac6cb45f40028b0583daf120fc891de6e9146b0683fbc8556e33714dad.lock
./transformers/36135304685d914515720daa48fc1adae57803e32ab82d5bde85ef78479e9765.b548f7e307531070391a881374674824b374f829e5d8f68857012de63fe2681a.lock
./transformers/55c96bd962ce1d360fde4947619318f1b4eb551430de678044699cbfeb99de6a.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730.json
./transformers/9c38ef325ee9369da1b4b968f92e65ff23befb359d8c51cab821a5a2fd77467e.95aa56f5baa208e6615988f702caba3cff650a3e0fc81149995ccbc168795db4.lock
./transformers/c0abea01d3725dc3c06370cced02822e09a715c98c62346f5ec9b730361df18d.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79.json
./transformers/1ad22be12336f9eec2b9fa372045631e8ffe9e2ca771f6802f88b5b15651f859.c46a0ea4d8cfc938ed324724108be3e06c2fb377cfdbd57ac70f5f589bb03a44.json
./transformers/715836a337ea91c1df044351c6041fcac9e268c8836a08c3aae639e8b38b4760.71e50b08dbe7e5375398e165096cacc3d2086119d6a449364490da6908de655e.lock
./transformers/775efbdc2152093295bc5824dee96da82a5f3c1f218dfface1b8cef3094bdf8f.c719a806caef7d36ec0185f14b3b5fa727d919f924abe35622b4b7147bfbb8c7.h5.json
./transformers/0ddddd3ca9e107b17a6901c92543692272af1c3238a8d7549fa937ba0057bbcf.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.json
./transformers/425529714b758f50b6d3f93f8093d859856fd41cf1cec7c8edf2ab44aee632b6.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.lock
./transformers/684a47ca6257e4ca71f0037771464c5b323e945fbc58697d2fad8a7dd1a2f8ba.3b69006860e7b5d0a63ffdddc01ddcd6b7c318a6f4fd793596552c741734c62d
./transformers/e1881a496d5b707363a530f017ae73140e9ce35e240c7fef5b6835a26bd20492.f19e829a37b1b5e2490c86b2233b4c0af113615667600e558758f314027f668e.json
./transformers/f8eeca194a413b200e1a5bd0e44d9b97e841dab11786978da40771d35dc6dd51.61622627847a3dbefbd551fce83592689111ec347ecce4b9a7ce14d10840be24
./transformers/702389a9cec22f2d79bf3fe49280d2eb5525b574d7a08fa786e30afd16b73de2.f45e1d59b04808261852aa4e0864ba21e35e23fbead10958b80bf4330c93aad2
./transformers/e35579e8a88906e94c27c62a44b4ed91aad2f30aace4ddbb72537133beee8046.0f4e7e01b1ce2b178aebfb2722a31f84570d00b96726ed9db0caed2c0856089d.lock
./transformers/f548ad4723a1111fd380d466e7291a47148498641c693e4959c3ff05bdcef0e3.13a045cad07359e6844c4f487af8e6323ad2308cac6357692d2359f1a9711443.lock
./transformers/83d419fb34e90155a8d95f7799f7a7316a327dc28c7ee6bee15b5a62d3c5ca6b.00628a9eeb8baf4080d44a0abe9fe8057893de20c7cb6e6423cddbf452f7d4d8.lock
./transformers/64a1d72b2bd05b0aff1a4dd9e7a90a6eea0312b4f914e80b0a923aa8f72219bd.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab.json
./transformers/3c61d016573b14f7f008c02c4e51a366c67ab274726fe2910691e2a761acf43e.37395cee442ab11005bcd270f3c34464dc1704b715b5d7d52b1a461abe3b9e4e
./transformers/0ddddd3ca9e107b17a6901c92543692272af1c3238a8d7549fa937ba0057bbcf.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.lock
./transformers/8785a0072d807ebc8a3b6bf5648744bfc3cc83e0e845c40b670d10c0d7827164.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4.json
./transformers/0d7d5b3fc19bf58d4b274990c8bcf5e307726bc18d95f40a1436dfb6a0892f85.294ebaa4cd17bb284635004c92d2c4d522ec488c828dcce0c2471b6f28e3fe82.lock
./transformers/83261b0c74c462e53d6367de0646b1fca07d0f15f1be045156b9cf8c71279cc9.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.json
./transformers/4029f7287fbd5fa400024f6bbfcfeae9c5f7906ea97afcaaa6348ab7c6a9f351.723d8eaff3b27ece543e768287eefb59290362b8ca3b1c18a759ad391dca295a.h5.json
./transformers/534479488c54aeaf9c3406f647aa2ec13648c06771ffe269edabebd4c412da1d.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4.json
./transformers/19c09c9654551e163f858f3c99c226a8d0026acc4935528df3b09179204efe4c.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.json
./transformers/fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51.lock
./transformers/9c38ef325ee9369da1b4b968f92e65ff23befb359d8c51cab821a5a2fd77467e.95aa56f5baa208e6615988f702caba3cff650a3e0fc81149995ccbc168795db4
./transformers/0ddddd3ca9e107b17a6901c92543692272af1c3238a8d7549fa937ba0057bbcf.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
./transformers/684fe667923972fb57f6b4dcb61a3c92763ad89882f3da5da9866baf14f2d60f.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f.lock
./transformers/81ffd70af12a736e520c197108c70778f231f23ad374bc228dd623abf2ee373b.0afca8ac6cb45f40028b0583daf120fc891de6e9146b0683fbc8556e33714dad.json
```
</details>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Use a Linux user account with previous transformers cache from 4.19.2
2. `git clone git@github.com:openai/whisper.git && cd whisper`
3. `python3 -m venv venv`
4. `. venv/bin/activate`
5. `python3 -m pip install -e .`
6. `venv/bin/whisper`
```
home@daniel-tablet1:~/PycharmProjects$ git clone git@github.com:openai/whisper.git
Cloning into 'whisper'...
Enter passphrase for key '/home/home/.ssh/id_ed25519':
remote: Enumerating objects: 192, done.
remote: Counting objects: 100% (82/82), done.
remote: Compressing objects: 100% (15/15), done.
remote: Total 192 (delta 73), reused 68 (delta 67), pack-reused 110
Receiving objects: 100% (192/192), 3.10 MiB | 13.97 MiB/s, done.
Resolving deltas: 100% (101/101), done.
home@daniel-tablet1:~/PycharmProjects$ cd whisper/
home@daniel-tablet1:~/PycharmProjects/whisper$ python3 -m venv venv
home@daniel-tablet1:~/PycharmProjects/whisper$ . venv/bin/activate
(venv) home@daniel-tablet1:~/PycharmProjects/whisper$ python3 -m pip install -e .
Obtaining file:///home/home/PycharmProjects/whisper
Preparing metadata (setup.py) ... done
Collecting ffmpeg-python==0.2.0
Downloading ffmpeg_python-0.2.0-py3-none-any.whl (25 kB)
Collecting more-itertools
Downloading more_itertools-8.14.0-py3-none-any.whl (52 kB)
ββββββββββββββββββββββββββββββββββββββββ 52.2/52.2 kB 10.3 MB/s eta 0:00:00
Collecting numpy
Downloading numpy-1.23.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.1 MB)
ββββββββββββββββββββββββββββββββββββββββ 17.1/17.1 MB 49.0 MB/s eta 0:00:00
Collecting torch
Downloading torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl (776.3 MB)
ββββββββββββββββββββββββββββββββββββββββ 776.3/776.3 MB 4.5 MB/s eta 0:00:00
Collecting tqdm
Downloading tqdm-4.64.1-py2.py3-none-any.whl (78 kB)
ββββββββββββββββββββββββββββββββββββββββ 78.5/78.5 kB 26.0 MB/s eta 0:00:00
Collecting transformers>=4.19.0
Downloading transformers-4.22.2-py3-none-any.whl (4.9 MB)
ββββββββββββββββββββββββββββββββββββββββ 4.9/4.9 MB 45.9 MB/s eta 0:00:00
Collecting future
Downloading future-0.18.2.tar.gz (829 kB)
ββββββββββββββββββββββββββββββββββββββββ 829.2/829.2 kB 62.7 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting requests
Using cached requests-2.28.1-py3-none-any.whl (62 kB)
Collecting regex!=2019.12.17
Downloading regex-2022.9.13-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (770 kB)
ββββββββββββββββββββββββββββββββββββββββ 770.5/770.5 kB 48.3 MB/s eta 0:00:00
Collecting pyyaml>=5.1
Using cached PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (682 kB)
Collecting filelock
Downloading filelock-3.8.0-py3-none-any.whl (10 kB)
Collecting packaging>=20.0
Using cached packaging-21.3-py3-none-any.whl (40 kB)
Collecting huggingface-hub<1.0,>=0.9.0
Downloading huggingface_hub-0.10.0-py3-none-any.whl (163 kB)
ββββββββββββββββββββββββββββββββββββββββ 163.5/163.5 kB 63.2 MB/s eta 0:00:00
Collecting tokenizers!=0.11.3,<0.13,>=0.11.1
Using cached tokenizers-0.12.1-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (6.6 MB)
Collecting typing-extensions
Downloading typing_extensions-4.4.0-py3-none-any.whl (26 kB)
Collecting pyparsing!=3.0.5,>=2.0.2
Using cached pyparsing-3.0.9-py3-none-any.whl (98 kB)
Collecting certifi>=2017.4.17
Using cached certifi-2022.9.24-py3-none-any.whl (161 kB)
Collecting idna<4,>=2.5
Downloading idna-3.4-py3-none-any.whl (61 kB)
ββββββββββββββββββββββββββββββββββββββββ 61.5/61.5 kB 23.6 MB/s eta 0:00:00
Collecting charset-normalizer<3,>=2
Downloading charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Collecting urllib3<1.27,>=1.21.1
Downloading urllib3-1.26.12-py2.py3-none-any.whl (140 kB)
ββββββββββββββββββββββββββββββββββββββββ 140.4/140.4 kB 54.7 MB/s eta 0:00:00
Using legacy 'setup.py install' for future, since package 'wheel' is not installed.
Installing collected packages: tokenizers, urllib3, typing-extensions, tqdm, regex, pyyaml, pyparsing, numpy, more-itertools, idna, future, filelock, charset-normalizer, certifi, torch, requests, packaging, ffmpeg-python, huggingface-hub, transformers, whisper
Running setup.py install for future ... done
Running setup.py develop for whisper
Successfully installed certifi-2022.9.24 charset-normalizer-2.1.1 ffmpeg-python-0.2.0 filelock-3.8.0 future-0.18.2 huggingface-hub-0.10.0 idna-3.4 more-itertools-8.14.0 numpy-1.23.3 packaging-21.3 pyparsing-3.0.9 pyyaml-6.0 regex-2022.9.13 requests-2.28.1 tokenizers-0.12.1 torch-1.12.1 tqdm-4.64.1 transformers-4.22.2 typing-extensions-4.4.0 urllib3-1.26.12 whisper-1.0
(venv) home@daniel-tablet1:~/PycharmProjects/whisper$ venv/bin/whisper
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
Moving 71 files to the new cache system
0%| | 0/71 [00:00<?, ?it/s]
There was a problem when trying to move your cache:
File "/home/home/PycharmProjects/whisper/venv/lib/python3.10/site-packages/transformers/utils/hub.py", line 1128, in <module>
move_cache()
File "/home/home/PycharmProjects/whisper/venv/lib/python3.10/site-packages/transformers/utils/hub.py", line 1071, in move_cache
hub_metadata[url] = get_hub_metadata(url, token=token)
File "/home/home/PycharmProjects/whisper/venv/lib/python3.10/site-packages/transformers/utils/hub.py", line 996, in get_hub_metadata
huggingface_hub.file_download._raise_for_status(r)
AttributeError: module 'huggingface_hub.file_download' has no attribute '_raise_for_status'
Please file an issue at https://github.com/huggingface/transformers/issues/new/choose and copy paste this whole message and we will do our best to help.
usage: whisper [-h] [--model {tiny.en,tiny,base.en,base,small.en,small,medium.en,medium,large}] [--model_dir MODEL_DIR] [--device DEVICE] [--output_dir OUTPUT_DIR]
[--verbose VERBOSE] [--task {transcribe,translate}]
[--language {af,am,ar,as,az,ba,be,bg,bn,bo,br,bs,ca,cs,cy,da,de,el,en,es,et,eu,fa,fi,fo,fr,gl,gu,ha,haw,hi,hr,ht,hu,hy,id,is,it,iw,ja,jw,ka,kk,km,kn,ko,la,lb,ln,lo,lt,lv,mg,mi,mk,ml,mn,mr,ms,mt,my,ne,nl,nn,no,oc,pa,pl,ps,pt,ro,ru,sa,sd,si,sk,sl,sn,so,sq,sr,su,sv,sw,ta,te,tg,th,tk,tl,tr,tt,uk,ur,uz,vi,yi,yo,zh,Afrikaans,Albanian,Amharic,Arabic,Armenian,Assamese,Azerbaijani,Bashkir,Basque,Belarusian,Bengali,Bosnian,Breton,Bulgarian,Burmese,Castilian,Catalan,Chinese,Croatian,Czech,Danish,Dutch,English,Estonian,Faroese,Finnish,Flemish,French,Galician,Georgian,German,Greek,Gujarati,Haitian,Haitian Creole,Hausa,Hawaiian,Hebrew,Hindi,Hungarian,Icelandic,Indonesian,Italian,Japanese,Javanese,Kannada,Kazakh,Khmer,Korean,Lao,Latin,Latvian,Letzeburgesch,Lingala,Lithuanian,Luxembourgish,Macedonian,Malagasy,Malay,Malayalam,Maltese,Maori,Marathi,Moldavian,Moldovan,Mongolian,Myanmar,Nepali,Norwegian,Nynorsk,Occitan,Panjabi,Pashto,Persian,Polish,Portuguese,Punjabi,Pushto,Romanian,Russian,Sanskrit,Serbian,Shona,Sindhi,Sinhala,Sinhalese,Slovak,Slovenian,Somali,Spanish,Sundanese,Swahili,Swedish,Tagalog,Tajik,Tamil,Tatar,Telugu,Thai,Tibetan,Turkish,Turkmen,Ukrainian,Urdu,Uzbek,Valencian,Vietnamese,Welsh,Yiddish,Yoruba}]
[--temperature TEMPERATURE] [--best_of BEST_OF] [--beam_size BEAM_SIZE] [--patience PATIENCE] [--length_penalty LENGTH_PENALTY]
[--suppress_tokens SUPPRESS_TOKENS] [--initial_prompt INITIAL_PROMPT] [--condition_on_previous_text CONDITION_ON_PREVIOUS_TEXT] [--fp16 FP16]
[--temperature_increment_on_fallback TEMPERATURE_INCREMENT_ON_FALLBACK] [--compression_ratio_threshold COMPRESSION_RATIO_THRESHOLD]
[--logprob_threshold LOGPROB_THRESHOLD] [--no_speech_threshold NO_SPEECH_THRESHOLD]
audio [audio ...]
whisper: error: the following arguments are required: audio
```
### Expected behavior
It should not print a stack trace or tell me to "copy paste this whole message and we will do our best to help".
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19419/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19418
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19418/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19418/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19418/events
|
https://github.com/huggingface/transformers/issues/19418
| 1,401,604,130
|
I_kwDOCUB6oc5Tisgi
| 19,418
|
T5ForConditionalGeneration checkpoint size mismatch
|
{
"login": "msamogh",
"id": 1230386,
"node_id": "MDQ6VXNlcjEyMzAzODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1230386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/msamogh",
"html_url": "https://github.com/msamogh",
"followers_url": "https://api.github.com/users/msamogh/followers",
"following_url": "https://api.github.com/users/msamogh/following{/other_user}",
"gists_url": "https://api.github.com/users/msamogh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/msamogh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/msamogh/subscriptions",
"organizations_url": "https://api.github.com/users/msamogh/orgs",
"repos_url": "https://api.github.com/users/msamogh/repos",
"events_url": "https://api.github.com/users/msamogh/events{/privacy}",
"received_events_url": "https://api.github.com/users/msamogh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @msamogh π \r\n\r\nTo explain why there is a mismatch, we would need to know exactly how the model was trained :) However, the most important part -- you may be able to load the checkpoint with these two strategies:\r\n1. Load the model architecture from the same configuration as your trained model\r\n2. After initializing the model architecture (and before loading the checkpoint), [resize the embeddings](https://huggingface.co/docs/transformers/v4.22.2/en/main_classes/model#transformers.PreTrainedModel.resize_token_embeddings)\r\n\r\nBoth strategies should change the shape of your architecture to match your checkpoint",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @gante, does interpolate_pos_embedding type functions do what you mentioned in point 2?\r\n\r\nHere is a snippet:\r\n\r\n```\r\ndef interpolate_pos_embed_multimae(model, checkpoint_model):\r\n pattern = \"input_adapters\\.(.*)\\.pos_emb\"\r\n matched_keys = [k for k in checkpoint_model if bool(re.match(pattern, k))]\r\n\r\n for key in matched_keys:\r\n domain = re.match(pattern, key).group(1) # group(0) is entire matched regex\r\n if getattr(model.input_adapters, domain, None) is not None:\r\n pos_embed_checkpoint = checkpoint_model[key]\r\n _, _, orig_H, orig_W = pos_embed_checkpoint.shape\r\n _, _, new_H, new_W = getattr(model.input_adapters, domain).pos_emb.shape\r\n if (orig_H != new_H) or (orig_W != new_W):\r\n print(f\"Key {key}: Position interpolate from {orig_H}x{orig_W} to {new_H}x{new_W}\")\r\n pos_embed_checkpoint = torch.nn.functional.interpolate(\r\n pos_embed_checkpoint, size=(new_H, new_W), mode='bicubic', align_corners=False)\r\n checkpoint_model[key] = pos_embed_checkpoint\r\n\r\n```\r\n\r\n",
"Hey @forkbabu π I do not know the answer to your question. However, from your code snippet, it seems like you are working with a vision model -- my recommendation would be to open a new issue and tag one of our vision experts ",
"Usually you don't encounter any problems when loading the model for which you've added some extra tokens during the training. In my case, it was the `pad_to_multiple_of` parameter that caused the trouble. It is claimed to do some Nvidia magic for a more efficient utilization of modern GPUs, so I used it when I created the model for training and then happily forgot about it: \r\n```\r\nmodel.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=16)\r\n```\r\n\r\nBut as it seems, the current API (4.33.0.dev0) struggles to load such models. The workaround would be:\r\n```\r\nMODEL_CHECKPOINT = '' # your directory here\r\nconfig_path = path.join(MODEL_CHECKPOINT, 'config.json')\r\nweights_path = path.join(MODEL_CHECKPOINT, 'pytorch_model.bin')\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(MODEL_CHECKPOINT)\r\n\r\nconfig = AutoConfig.from_pretrained(config_path)\r\nmodel = T5ForConditionalGeneration(config) \r\nmodel.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=16)\r\nmodel.load_state_dict(torch.load(weights_path, map_location=torch.device('cpu')))\r\n```\r\nWhich outputs: \\<All keys matched successfully>",
"Yep, the vocab size is missing some changes, fix is here: #25732",
"@gante , the fix that @ArthurZucker started in #25732 looks like a good one, and it addresses this error that I bet more people are going to run into, now that there is a warning when training that encourages us to pad to a multiple of 8.",
"Here are some helper functions I wrote to correctly resize embeddings for a model (before this bug is fixed), and repair broken models that were saved with this incorrect vocab size. (I might have typed things too narrowly without realizing it.)\r\n\r\nThis is tiding me over until #25732 or an equivalent is merged.\r\n\r\n```python\r\ndef resize_model_token_embeddings_correctly(model: PreTrainedModel, \r\n new_num_tokens: Optional[int] = None, \r\n pad_to_multiple_of: Optional[int] = None)-> nn.Embedding:\r\n \"\"\"This is a workaround for the bug: https://github.com/huggingface/transformers/issues/25729 that doesn't save embedding vocab sizes correctly\"\"\"\r\n vocab_size_before:int = model.get_input_embeddings().weight.shape[0]\r\n model_embeds: nn.Embedding = model.resize_token_embeddings(new_num_tokens, pad_to_multiple_of=pad_to_multiple_of)\r\n vocab_size_after:int = model.get_input_embeddings().weight.shape[0]\r\n # This appears to be the correct way to resize the embeddings since the pad_to_multiple_of is broken in the current version of Transformers:\r\n # This has been fixed in this PR: https://github.com/huggingface/transformers/pull/25732/files#diff-6b72b98c4c2dcfc6cc606843917733f5d858374fbc22a735ff483bbc0c1e63ea\r\n # But in the meantime, we do this part manually: (Most of this is copied from the resize_token_embeddings method or that PR)\r\n\r\n # You might think we need to early-out in case they didn't pass new_num_tokens, but the number of tokens might _still_ change\r\n # if they passed a pad_to_multiple_of in, so we do this to be safe for now.\r\n if vocab_size_before == vocab_size_after:\r\n return model_embeds\r\n \r\n model.config.vocab_size = vocab_size_after\r\n model.vocab_size = vocab_size_after\r\n model.tie_weights()\r\n return model_embeds\r\n\r\ndef fix_model_folder_with_incorrect_vocab_size(model_folder: Path, pad_vocab_size_to_multiple_of: int=64):\r\n \"\"\"If you got bitten by the bug 
https://github.com/huggingface/transformers/issues/25729, and your model got saved with an incorrect vocab size, this will fix it.\"\"\"\r\n config_file = model_folder / \"config.json\"\r\n config = AutoConfig.from_pretrained(config_file)\r\n vocab_size:int = config.vocab_size\r\n # is vocab_size an int, and is it bigger than 0?\r\n assert isinstance(vocab_size, int) and vocab_size > 0\r\n \r\n if vocab_size % pad_vocab_size_to_multiple_of != 0:\r\n print(f\"vocab_size ({vocab_size}) is not a multiple of {pad_vocab_size_to_multiple_of}. Fixing it...\")\r\n # fix the vocab_size:\r\n config.vocab_size = vocab_size + (pad_vocab_size_to_multiple_of - vocab_size % pad_vocab_size_to_multiple_of)\r\n # save the config file back to the checkpoint folder:\r\n config.save_pretrained(model_folder)\r\n print(f\"Fixed the vocab_size to {config.vocab_size}\")\r\n\r\ndef load_model_with_missized_vocab_size(model_folder: Path, pad_vocab_size_to_multiple_of: int=64) -> PreTrainedModel:\r\n \"\"\"There is a bug in how a model was getting saved, where the vocab size was not set to a multiple of a given padding, but it should have been.\"\"\"\r\n fix_model_folder_with_incorrect_vocab_size(model_folder, pad_vocab_size_to_multiple_of)\r\n model:PreTrainedModel = AutoModelForSeq2SeqLM.from_pretrained(model_folder)\r\n return model\r\n ```"
] | 1,665
| 1,694
| 1,668
|
NONE
| null |
### System Info
## Error Description
I trained a `T5ForConditionalGeneration` model and saved the checkpoint using PyTorch Lightning's Trainer to a `.ckpt` file. But when I try to load the state_dict back using `model.load_state_dict()`, I get this error:
```python
RuntimeError: Error(s) in loading state_dict for T5ForConditionalGeneration:
Unexpected key(s) in state_dict: "decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight".
size mismatch for shared.weight: copying a param with shape torch.Size([32103, 512]) from checkpoint, the shape in current model is torch.Size([32128, 512]).
size mismatch for encoder.embed_tokens.weight: copying a param with shape torch.Size([32103, 512]) from checkpoint, the shape in current model is torch.Size([32128, 512]).
size mismatch for decoder.embed_tokens.weight: copying a param with shape torch.Size([32103, 512]) from checkpoint, the shape in current model is torch.Size([32128, 512]).
size mismatch for lm_head.weight: copying a param with shape torch.Size([32103, 512]) from checkpoint, the shape in current model is torch.Size([32128, 512]).
```
I have not changed the model definition in any way, and the keys match, so I'm really not sure how the sizes could mismatch when loading.
## Loading the model
This is how I'm loading the model:
```python
tokenizer = T5Tokenizer.from_pretrained(args["model_checkpoint"], bos_token="[bos]", eos_token="[eos]", sep_token="[sep]")
model = T5ForConditionalGeneration.from_pretrained(args["model_checkpoint"], ignore_mismatched_sizes=True)
model.load_state_dict({k[6:]: v for k, v in ckpt["state_dict"].items()})
```
I even tried to pass `ignore_mismatched_sizes=True` to the `from_pretrained` call, and that didn't help either.
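A minimal sketch of the usual fix for this class of error (resizing the embedding matrices to the checkpoint's vocabulary size before calling `load_state_dict`) is shown below. The tiny config dimensions and the target size of 32103 (the T5 base vocabulary of 32100 plus the three added special tokens) are illustrative assumptions, not the reporter's exact setup:

```python
from transformers import T5Config, T5ForConditionalGeneration

# Tiny illustrative config; a real checkpoint would use its own saved config
config = T5Config(vocab_size=32128, d_model=64, d_ff=128, num_layers=2, num_heads=2)
model = T5ForConditionalGeneration(config)

# Shrink shared/embed_tokens/lm_head to the checkpoint's vocab size
# (32100 base tokens + 3 added special tokens = 32103)
model.resize_token_embeddings(32103)

print(model.get_input_embeddings().weight.shape)  # first dim is now 32103

# With the shapes aligned, loading the checkpoint weights should succeed:
# model.load_state_dict({k[6:]: v for k, v in ckpt["state_dict"].items()})
```

After the resize, `model.config.vocab_size` is updated as well, so subsequent saves keep the architecture and checkpoint consistent.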
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
As described above.
### Expected behavior
No error
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19418/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19417
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19417/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19417/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19417/events
|
https://github.com/huggingface/transformers/pull/19417
| 1,401,583,868
|
PR_kwDOCUB6oc5AanNo
| 19,417
|
Make `MobileBert` tokenizers independent from `Bert`
|
{
"login": "501Good",
"id": 10570950,
"node_id": "MDQ6VXNlcjEwNTcwOTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/10570950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/501Good",
"html_url": "https://github.com/501Good",
"followers_url": "https://api.github.com/users/501Good/followers",
"following_url": "https://api.github.com/users/501Good/following{/other_user}",
"gists_url": "https://api.github.com/users/501Good/gists{/gist_id}",
"starred_url": "https://api.github.com/users/501Good/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/501Good/subscriptions",
"organizations_url": "https://api.github.com/users/501Good/orgs",
"repos_url": "https://api.github.com/users/501Good/repos",
"events_url": "https://api.github.com/users/501Good/events{/privacy}",
"received_events_url": "https://api.github.com/users/501Good/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19417). All of your documentation changes will be reflected on that endpoint.",
"Hi @501Good, as you can see, your rebase has messed the diff on Git a little. Could you open a fresh PR from your branch?",
"Hi @sgugger, sorry for that! Opened a new PR here #19531! "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Copied the code from `Bert` tokenizers into `MobileBert` tokenizers to make the latter self-contained.
Fixes #19303
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19417/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19417",
"html_url": "https://github.com/huggingface/transformers/pull/19417",
"diff_url": "https://github.com/huggingface/transformers/pull/19417.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19417.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19416
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19416/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19416/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19416/events
|
https://github.com/huggingface/transformers/pull/19416
| 1,401,564,131
|
PR_kwDOCUB6oc5AajAE
| 19,416
|
Wrap TAPAS integration test forward passes with torch.no_grad()
|
{
"login": "daspartho",
"id": 59410571,
"node_id": "MDQ6VXNlcjU5NDEwNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daspartho",
"html_url": "https://github.com/daspartho",
"followers_url": "https://api.github.com/users/daspartho/followers",
"following_url": "https://api.github.com/users/daspartho/following{/other_user}",
"gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daspartho/subscriptions",
"organizations_url": "https://api.github.com/users/daspartho/orgs",
"repos_url": "https://api.github.com/users/daspartho/repos",
"events_url": "https://api.github.com/users/daspartho/events{/privacy}",
"received_events_url": "https://api.github.com/users/daspartho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR wraps forward passes in TAPAS integration tests with `torch.no_grad()`, as proposed in issue #14642. This avoids the computation of unnecessary gradients during inference.
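As a generic illustration of the pattern applied in this PR (not the actual TAPAS test code — the stand-in module is hypothetical), wrapping an inference forward pass in `torch.no_grad()` looks like:

```python
import torch

# Stand-in model purely for illustration
model = torch.nn.Linear(4, 2)
model.eval()
inputs = torch.randn(1, 4)

with torch.no_grad():  # disables autograd bookkeeping for this block
    outputs = model(inputs)

# No gradient graph was built, saving memory and compute during inference
print(outputs.requires_grad)
```

Inside the `no_grad` context, intermediate activations needed for backpropagation are not stored, which is exactly what integration tests that only check output values want.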
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please check it?
Thanks :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19416/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19416",
"html_url": "https://github.com/huggingface/transformers/pull/19416",
"diff_url": "https://github.com/huggingface/transformers/pull/19416.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19416.patch",
"merged_at": 1665428589000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19415
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19415/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19415/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19415/events
|
https://github.com/huggingface/transformers/pull/19415
| 1,401,555,447
|
PR_kwDOCUB6oc5AahJ6
| 19,415
|
fix misspelled word in ensure_valid_input docstring
|
{
"login": "Bearnardd",
"id": 43574448,
"node_id": "MDQ6VXNlcjQzNTc0NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/43574448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bearnardd",
"html_url": "https://github.com/Bearnardd",
"followers_url": "https://api.github.com/users/Bearnardd/followers",
"following_url": "https://api.github.com/users/Bearnardd/following{/other_user}",
"gists_url": "https://api.github.com/users/Bearnardd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bearnardd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bearnardd/subscriptions",
"organizations_url": "https://api.github.com/users/Bearnardd/orgs",
"repos_url": "https://api.github.com/users/Bearnardd/repos",
"events_url": "https://api.github.com/users/Bearnardd/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bearnardd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
This PR fixes a misspelled word in the docstring of the `ensure_valid_input` function in `convert_graph_to_onnx.py`.
Fixes https://github.com/huggingface/transformers/issues/19362
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19415/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19415",
"html_url": "https://github.com/huggingface/transformers/pull/19415",
"diff_url": "https://github.com/huggingface/transformers/pull/19415.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19415.patch",
"merged_at": 1665419637000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19414
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19414/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19414/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19414/events
|
https://github.com/huggingface/transformers/pull/19414
| 1,401,551,475
|
PR_kwDOCUB6oc5AagUV
| 19,414
|
Wrap ImageGPT integration test forward passes with torch.no_grad()
|
{
"login": "daspartho",
"id": 59410571,
"node_id": "MDQ6VXNlcjU5NDEwNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daspartho",
"html_url": "https://github.com/daspartho",
"followers_url": "https://api.github.com/users/daspartho/followers",
"following_url": "https://api.github.com/users/daspartho/following{/other_user}",
"gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daspartho/subscriptions",
"organizations_url": "https://api.github.com/users/daspartho/orgs",
"repos_url": "https://api.github.com/users/daspartho/repos",
"events_url": "https://api.github.com/users/daspartho/events{/privacy}",
"received_events_url": "https://api.github.com/users/daspartho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
As proposed in issue #14642, this PR wraps forward passes in ImageGPT integration tests with `torch.no_grad()`. This way, no unnecessary gradients are computed during inference.
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please take a look at it?
Thanks :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19414/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19414",
"html_url": "https://github.com/huggingface/transformers/pull/19414",
"diff_url": "https://github.com/huggingface/transformers/pull/19414.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19414.patch",
"merged_at": 1665428604000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19413
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19413/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19413/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19413/events
|
https://github.com/huggingface/transformers/pull/19413
| 1,401,546,020
|
PR_kwDOCUB6oc5AafJI
| 19,413
|
Wrap FNet integration test forward passes with torch.no_grad()
|
{
"login": "daspartho",
"id": 59410571,
"node_id": "MDQ6VXNlcjU5NDEwNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daspartho",
"html_url": "https://github.com/daspartho",
"followers_url": "https://api.github.com/users/daspartho/followers",
"following_url": "https://api.github.com/users/daspartho/following{/other_user}",
"gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daspartho/subscriptions",
"organizations_url": "https://api.github.com/users/daspartho/orgs",
"repos_url": "https://api.github.com/users/daspartho/repos",
"events_url": "https://api.github.com/users/daspartho/events{/privacy}",
"received_events_url": "https://api.github.com/users/daspartho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
As proposed in issue #14642, this PR wraps forward passes in FNet integration tests with torch.no_grad(). This way, no unnecessary gradients are computed during inference.
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please take a look at it?
Thanks :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19413/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19413",
"html_url": "https://github.com/huggingface/transformers/pull/19413",
"diff_url": "https://github.com/huggingface/transformers/pull/19413.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19413.patch",
"merged_at": 1665428627000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19412
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19412/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19412/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19412/events
|
https://github.com/huggingface/transformers/pull/19412
| 1,401,540,029
|
PR_kwDOCUB6oc5Aad1a
| 19,412
|
Wrap FlauBERT integration test forward passes with torch.no_grad()
|
{
"login": "daspartho",
"id": 59410571,
"node_id": "MDQ6VXNlcjU5NDEwNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daspartho",
"html_url": "https://github.com/daspartho",
"followers_url": "https://api.github.com/users/daspartho/followers",
"following_url": "https://api.github.com/users/daspartho/following{/other_user}",
"gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daspartho/subscriptions",
"organizations_url": "https://api.github.com/users/daspartho/orgs",
"repos_url": "https://api.github.com/users/daspartho/repos",
"events_url": "https://api.github.com/users/daspartho/events{/privacy}",
"received_events_url": "https://api.github.com/users/daspartho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR wraps forward passes in FlauBERT integration tests with `torch.no_grad()`, as proposed in issue #14642. This avoids the computation of unnecessary gradients during inference.
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please take a look at it?
Thanks :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19412/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19412",
"html_url": "https://github.com/huggingface/transformers/pull/19412",
"diff_url": "https://github.com/huggingface/transformers/pull/19412.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19412.patch",
"merged_at": 1665428650000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19411
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19411/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19411/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19411/events
|
https://github.com/huggingface/transformers/pull/19411
| 1,401,406,056
|
PR_kwDOCUB6oc5AaBfO
| 19,411
|
Remove dependency of Roberta in Blenderbot
|
{
"login": "rchan26",
"id": 44200705,
"node_id": "MDQ6VXNlcjQ0MjAwNzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/44200705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rchan26",
"html_url": "https://github.com/rchan26",
"followers_url": "https://api.github.com/users/rchan26/followers",
"following_url": "https://api.github.com/users/rchan26/following{/other_user}",
"gists_url": "https://api.github.com/users/rchan26/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rchan26/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rchan26/subscriptions",
"organizations_url": "https://api.github.com/users/rchan26/orgs",
"repos_url": "https://api.github.com/users/rchan26/repos",
"events_url": "https://api.github.com/users/rchan26/events{/privacy}",
"received_events_url": "https://api.github.com/users/rchan26/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @sgugger, I have now removed the global ` # Copied from` statements, and replaced them with Copied from statements to the individual methods that I am copying into the Blenderbot classes. This has resolved my earlier problem and now `pytest tests/models/blenderbot/test_tokenization_blenderbot.py` runs without error.\r\n\r\nCurrently, my PR now fails `python utils/check_copies.py` as there are two methods named `mask_token` in the `RobertaTokenizerFast`. This means that I currently have two `# Copied from transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast.mask_token` statements and so there's a matching problem.\r\n\r\nHow should I deal with the case where there are two methods with the same name?",
"I've just seen that someone had a similar issue with copying over a method with a setter: https://github.com/huggingface/transformers/pull/19408#pullrequestreview-1134731877.\r\n\r\nI have now followed the advice on this PR and have removed my Copied from statement on the setter for `mask_token`. Seems like all tests pass now π"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Hi @sgugger,
This PR addresses https://github.com/huggingface/transformers/issues/19303: the `RobertaTokenizer` dependency has been removed from `BlenderbotTokenizer` and the `RobertaTokenizerFast` dependency has been removed from `BlenderbotTokenizerFast`.
I did, however, encounter the following error when running `pytest tests/models/blenderbot/test_tokenization_blenderbot.py`:
```
========================================================================= test session starts =========================================================================
platform darwin -- Python 3.10.4, pytest-7.1.3, pluggy-1.0.0
rootdir: /Users/rchan/Library/CloudStorage/OneDrive-TheAlanTuringInstitute/huggingface/transformers, configfile: setup.cfg
collected 4 items
tests/models/blenderbot/test_tokenization_blenderbot.py F... [100%]
============================================================================== FAILURES ===============================================================================
___________________________________________________ Blenderbot3BTokenizerTests.test_3B_tokenization_same_as_parlai ____________________________________________________
self = <tests.models.blenderbot.test_tokenization_blenderbot.Blenderbot3BTokenizerTests testMethod=test_3B_tokenization_same_as_parlai>
def test_3B_tokenization_same_as_parlai(self):
assert self.tokenizer_3b.add_prefix_space
> assert self.tokenizer_3b([" Sam", "Sam"]).input_ids == [[5502, 2], [5502, 2]]
E assert [[1, 5502, 2], [1, 5502, 2]] == [[5502, 2], [5502, 2]]
E At index 0 diff: [1, 5502, 2] != [5502, 2]
E Use -v to get more diff
tests/models/blenderbot/test_tokenization_blenderbot.py:48: AssertionError
------------------------------------------------------------------------ Captured stderr call -------------------------------------------------------------------------
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
========================================================================== warnings summary ===========================================================================
src/transformers/testing_utils.py:28
/Users/rchan/Library/CloudStorage/OneDrive-TheAlanTuringInstitute/huggingface/transformers/src/transformers/testing_utils.py:28: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
from distutils.util import strtobool
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
======================================================================= short test summary info =======================================================================
FAILED tests/models/blenderbot/test_tokenization_blenderbot.py::Blenderbot3BTokenizerTests::test_3B_tokenization_same_as_parlai - assert [[1, 5502, 2], [1, 5502, 2]...
=============================================================== 1 failed, 3 passed, 1 warning in 2.15s ================================================================
```
Any idea on what I have done wrong here?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19411/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19411",
"html_url": "https://github.com/huggingface/transformers/pull/19411",
"diff_url": "https://github.com/huggingface/transformers/pull/19411.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19411.patch",
"merged_at": 1665408322000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19410
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19410/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19410/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19410/events
|
https://github.com/huggingface/transformers/pull/19410
| 1,401,357,444
|
PR_kwDOCUB6oc5AZ3Ro
| 19,410
|
Removed Bert and XML Dependency from Herbert
|
{
"login": "harry7337",
"id": 75776208,
"node_id": "MDQ6VXNlcjc1Nzc2MjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/75776208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harry7337",
"html_url": "https://github.com/harry7337",
"followers_url": "https://api.github.com/users/harry7337/followers",
"following_url": "https://api.github.com/users/harry7337/following{/other_user}",
"gists_url": "https://api.github.com/users/harry7337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harry7337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harry7337/subscriptions",
"organizations_url": "https://api.github.com/users/harry7337/orgs",
"repos_url": "https://api.github.com/users/harry7337/repos",
"events_url": "https://api.github.com/users/harry7337/events{/privacy}",
"received_events_url": "https://api.github.com/users/harry7337/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Related to #19303 (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Thank you so much @sgugger for your guidance! I think it should be good to go now!
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19410/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19410",
"html_url": "https://github.com/huggingface/transformers/pull/19410",
"diff_url": "https://github.com/huggingface/transformers/pull/19410.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19410.patch",
"merged_at": 1665157749000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19409
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19409/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19409/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19409/events
|
https://github.com/huggingface/transformers/pull/19409
| 1,401,303,347
|
PR_kwDOCUB6oc5AZrtm
| 19,409
|
Clip device map
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Looks good to me, pinging @sgugger \r\n\r\nWhat happened with your branch? :smile: ",
"_The documentation is not available anymore as the PR was closed or merged._",
"> Looks good to me, pinging @sgugger\r\n> \r\n> What happened with your branch? smile\r\n\r\nYeah sorry about this :sweat_smile: "
] | 1,665
| 1,665
| 1,665
|
MEMBER
| null |
# What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19409/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19409",
"html_url": "https://github.com/huggingface/transformers/pull/19409",
"diff_url": "https://github.com/huggingface/transformers/pull/19409.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19409.patch",
"merged_at": 1665159555000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19408
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19408/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19408/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19408/events
|
https://github.com/huggingface/transformers/pull/19408
| 1,401,186,084
|
PR_kwDOCUB6oc5AZSZG
| 19,408
|
Remove Dependency between Bart and LED (slow/fast)
|
{
"login": "Infrared1029",
"id": 60873139,
"node_id": "MDQ6VXNlcjYwODczMTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/60873139?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Infrared1029",
"html_url": "https://github.com/Infrared1029",
"followers_url": "https://api.github.com/users/Infrared1029/followers",
"following_url": "https://api.github.com/users/Infrared1029/following{/other_user}",
"gists_url": "https://api.github.com/users/Infrared1029/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Infrared1029/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Infrared1029/subscriptions",
"organizations_url": "https://api.github.com/users/Infrared1029/orgs",
"repos_url": "https://api.github.com/users/Infrared1029/repos",
"events_url": "https://api.github.com/users/Infrared1029/events{/privacy}",
"received_events_url": "https://api.github.com/users/Infrared1029/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger hopefully this does it?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Just tried locally your PR and found the reason why we can't have a global copied from on the tokenizer (sorry I didn't spot it earlier): the `_pad` method is overwritten.\r\nSo instead we need to applied the copied from on each method (except `_pad`) and not copy the whole class.\r\n\r\nSorry about that!",
"> Just tried locally your PR and found the reason why we can't have a global copied from on the tokenizer (sorry I didn't spot it earlier): the `_pad` method is overwritten. So instead we need to applied the copied from on each method (except `_pad`) and not copy the whole class.\r\n> \r\n> Sorry about that!\r\n\r\nso basically, write those copy comments again on both the slow and fast tokenizers?",
"Yeah, sorry",
"> Yeah, sorry\r\n\r\nOh dont be, on it now :D, thanks for the quick reply!",
"@sgugger anything left?",
"btw, the `mask_token` method in the fast tokenizer, there is a method and a setter , but i have the same comment on both (Copied from `BartTokenizerFast.mask_token)` is that the right way or am i supposed to keep the one above the method only and ignore the setter's one?",
"thanks a lot for the quick replies, what else is left?",
"Should be good now, just waiting for all tests to pass :-)",
"oh finally, the green light"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Removes the dependency between LED and Bart
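Decoupling works by inlining the shared code and marking it with the repository's `# Copied from` comments, which a consistency script then verifies against the original definitions. A toy sketch of how such a check can operate (an illustrative assumption, not the actual `utils/check_copies.py` logic):

```python
import re

# Toy registry mapping fully-qualified names to their canonical source text.
# In the real repository this would be resolved by reading the module files.
CANONICAL = {
    "bart.BartTokenizer.build_inputs": "def build_inputs(self, ids):\n    return ids + [self.eos]",
}

# A "# Copied from <name>" marker followed by the copied block of code.
COPIED_FROM = re.compile(r"# Copied from (\S+)\n((?:.+\n?)*)")

def check_copies(source: str) -> list:
    """Return the names whose copied block no longer matches the canonical one."""
    stale = []
    for match in COPIED_FROM.finditer(source):
        name, body = match.group(1), match.group(2).rstrip("\n")
        if CANONICAL.get(name, body) != body:
            stale.append(name)
    return stale

in_sync = "# Copied from bart.BartTokenizer.build_inputs\ndef build_inputs(self, ids):\n    return ids + [self.eos]"
drifted = "# Copied from bart.BartTokenizer.build_inputs\ndef build_inputs(self, ids):\n    return ids"
```

With this framing, a `# Copied from` block that drifts from its source is flagged, which is why overridden methods (like `_pad` in the discussion above) cannot carry the marker.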
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19408/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19408",
"html_url": "https://github.com/huggingface/transformers/pull/19408",
"diff_url": "https://github.com/huggingface/transformers/pull/19408.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19408.patch",
"merged_at": 1665159590000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19407
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19407/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19407/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19407/events
|
https://github.com/huggingface/transformers/pull/19407
| 1,400,996,927
|
PR_kwDOCUB6oc5AYpqb
| 19,407
|
Removed XML and Bert dependency from Herbert tokenizer
|
{
"login": "harry7337",
"id": 75776208,
"node_id": "MDQ6VXNlcjc1Nzc2MjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/75776208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harry7337",
"html_url": "https://github.com/harry7337",
"followers_url": "https://api.github.com/users/harry7337/followers",
"following_url": "https://api.github.com/users/harry7337/following{/other_user}",
"gists_url": "https://api.github.com/users/harry7337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harry7337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harry7337/subscriptions",
"organizations_url": "https://api.github.com/users/harry7337/orgs",
"repos_url": "https://api.github.com/users/harry7337/repos",
"events_url": "https://api.github.com/users/harry7337/events{/privacy}",
"received_events_url": "https://api.github.com/users/harry7337/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19407). All of your documentation changes will be reflected on that endpoint.",
"Hi there! Thanks a lot for working on this, but your PR shows a diff of 577 files when it should just be the two tokenizer file you are touching.\r\nI think it might be because you have a different version fo black that we are using in your environment. Could you try doing `pip install -e. [quality]`?",
"Superseded by #19410 "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #19303. Removed the dependency of the HerBERT (slow/fast) tokenizers on BERT and XLM.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Pinging @sgugger for this issue!
Black seems to be working fine on my system, but the automated tests show errors for files that I haven't modified. For example, wav2vec2 and blenderbot_small have only style changes but are failing run_tests_tf and run_tests_torch respectively. Let me know if this is ok!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19407/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19407",
"html_url": "https://github.com/huggingface/transformers/pull/19407",
"diff_url": "https://github.com/huggingface/transformers/pull/19407.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19407.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19406
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19406/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19406/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19406/events
|
https://github.com/huggingface/transformers/pull/19406
| 1,400,994,386
|
PR_kwDOCUB6oc5AYpG8
| 19,406
|
Decouples `XLMProphet` model from `Prophet`
|
{
"login": "srhrshr",
"id": 2330069,
"node_id": "MDQ6VXNlcjIzMzAwNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2330069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srhrshr",
"html_url": "https://github.com/srhrshr",
"followers_url": "https://api.github.com/users/srhrshr/followers",
"following_url": "https://api.github.com/users/srhrshr/following{/other_user}",
"gists_url": "https://api.github.com/users/srhrshr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srhrshr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srhrshr/subscriptions",
"organizations_url": "https://api.github.com/users/srhrshr/orgs",
"repos_url": "https://api.github.com/users/srhrshr/repos",
"events_url": "https://api.github.com/users/srhrshr/events{/privacy}",
"received_events_url": "https://api.github.com/users/srhrshr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger - the requested changes have all been done. Thanks for your review! "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
@sgugger ,
Per the issue #19303, the `Prophet` model dependency is removed from `XLMProphet` and it now directly inherits from `PreTrainedModel`.
- [As discussed in a different PR review](https://github.com/huggingface/transformers/pull/19346#discussion_r988069210) , I've moved some of the docstring examples to inside the corresponding docs/source location. Let me know if there are some tests you want me to run.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Thanks for reviewing the PR!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19406/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19406",
"html_url": "https://github.com/huggingface/transformers/pull/19406",
"diff_url": "https://github.com/huggingface/transformers/pull/19406.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19406.patch",
"merged_at": 1665499524000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19405
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19405/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19405/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19405/events
|
https://github.com/huggingface/transformers/pull/19405
| 1,400,983,705
|
PR_kwDOCUB6oc5AYm7O
| 19,405
|
Remove unneeded words from audio-related feature extractors
|
{
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Lgtm thanks a lot π"
] | 1,665
| 1,665
| 1,665
|
MEMBER
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19405/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19405",
"html_url": "https://github.com/huggingface/transformers/pull/19405",
"diff_url": "https://github.com/huggingface/transformers/pull/19405.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19405.patch",
"merged_at": 1665150773000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19404
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19404/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19404/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19404/events
|
https://github.com/huggingface/transformers/pull/19404
| 1,400,976,483
|
PR_kwDOCUB6oc5AYla9
| 19,404
|
remove RobertaConfig inheritance from MarkupLMConfig
|
{
"login": "D3xter1922",
"id": 59790120,
"node_id": "MDQ6VXNlcjU5NzkwMTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/59790120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/D3xter1922",
"html_url": "https://github.com/D3xter1922",
"followers_url": "https://api.github.com/users/D3xter1922/followers",
"following_url": "https://api.github.com/users/D3xter1922/following{/other_user}",
"gists_url": "https://api.github.com/users/D3xter1922/gists{/gist_id}",
"starred_url": "https://api.github.com/users/D3xter1922/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/D3xter1922/subscriptions",
"organizations_url": "https://api.github.com/users/D3xter1922/orgs",
"repos_url": "https://api.github.com/users/D3xter1922/repos",
"events_url": "https://api.github.com/users/D3xter1922/events{/privacy}",
"received_events_url": "https://api.github.com/users/D3xter1922/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Great work, thanks a lot! There is a typo in the docstring that is responsible for the failing test. I believe my suggestion should fix it :-)\r\n\r\nThank you for your suggestion.",
"Thanks for your work on this!"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Related to #19303
Removes `RobertaConfig` and `BertConfig` dependency from `MarkupLMConfig`. Even though `RobertaConfig` inherits from `BertConfig`, I have changed `MarkupLMConfig` to directly inherit from `PretrainedConfig`.
Added the following arguments in `__init__`:
- `bos_token_id = 0`
- `eos_token_id = 2`
- `position_embedding_type="absolute"`
- `use_cache=True`
- `classifier_dropout=None`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19404/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19404",
"html_url": "https://github.com/huggingface/transformers/pull/19404",
"diff_url": "https://github.com/huggingface/transformers/pull/19404.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19404.patch",
"merged_at": 1665405899000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19403
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19403/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19403/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19403/events
|
https://github.com/huggingface/transformers/pull/19403
| 1,400,911,789
|
PR_kwDOCUB6oc5AYXcb
| 19,403
|
Remove dependency of Bert from Squeezebert tokenizer
|
{
"login": "rchan26",
"id": 44200705,
"node_id": "MDQ6VXNlcjQ0MjAwNzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/44200705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rchan26",
"html_url": "https://github.com/rchan26",
"followers_url": "https://api.github.com/users/rchan26/followers",
"following_url": "https://api.github.com/users/rchan26/following{/other_user}",
"gists_url": "https://api.github.com/users/rchan26/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rchan26/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rchan26/subscriptions",
"organizations_url": "https://api.github.com/users/rchan26/orgs",
"repos_url": "https://api.github.com/users/rchan26/repos",
"events_url": "https://api.github.com/users/rchan26/events{/privacy}",
"received_events_url": "https://api.github.com/users/rchan26/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I see that my code currently fails the style and code consistency checks. When I run `make style`, it seems to change a lot of files and ones that I did not touch. Is this normal?\r\n\r\nI'm also trying to run `make repo-consistency` but keep getting\r\n```\r\npython utils/check_copies.py --fix_and_overwrite\r\nmake: python: No such file or directory\r\nmake: *** [fix-copies] Error 1\r\n```\r\nwhich is strange as I am running this from the root directory...",
"Hi @sgugger, many thanks for the quick replies! I have made the changes you mentioned above, and regarding `make repo-consistency` and `make-style`, it seems like `pip install -e .\"[quality]\"` did the trick!"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Hi @sgugger,
Fixes #19303, the `BertTokenizer` dependency has been removed from `SqueezeBertTokenizer` and the `BertTokenizerFast` dependency has been removed from `SqueezeBertTokenizerFast`.
I ran `pytest tests/models/squeezebert/test_tokenization_squeezebert.py`, which passed.
Thanks for reviewing this! :)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19403/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19403",
"html_url": "https://github.com/huggingface/transformers/pull/19403",
"diff_url": "https://github.com/huggingface/transformers/pull/19403.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19403.patch",
"merged_at": 1665156775000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19402
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19402/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19402/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19402/events
|
https://github.com/huggingface/transformers/pull/19402
| 1,400,902,367
|
PR_kwDOCUB6oc5AYVcV
| 19,402
|
Add `OPTForQuestionAnswering`
|
{
"login": "clementapa",
"id": 45719060,
"node_id": "MDQ6VXNlcjQ1NzE5MDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/45719060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clementapa",
"html_url": "https://github.com/clementapa",
"followers_url": "https://api.github.com/users/clementapa/followers",
"following_url": "https://api.github.com/users/clementapa/following{/other_user}",
"gists_url": "https://api.github.com/users/clementapa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clementapa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clementapa/subscriptions",
"organizations_url": "https://api.github.com/users/clementapa/orgs",
"repos_url": "https://api.github.com/users/clementapa/repos",
"events_url": "https://api.github.com/users/clementapa/events{/privacy}",
"received_events_url": "https://api.github.com/users/clementapa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Failures are unrelated to this PR, so merging :-)",
"Hi @clementapa , \r\nWhile adding `OPTForQuestionAnswering` did you test if you were able to train (say, fine-tune on squad) a QA model for any of the opt variants? \r\n\r\nI am getting a fast tokenizer error here: https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa.py#L345 \r\n\r\nEssentially, the `run_qa.py` script requires the model to have a fast tokenizer which is not available for the OPT models. \r\n\r\nThanks,",
"Hey! The Fast tokenizer is available for OPT. Make sure you are using main, as a recent issue with automatic conversion for OPT tokenizer was fixed. See #20823 "
] | 1,665
| 1,681
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds `OPTForQuestionAnswering` to Transformers. The implementation is based on `BloomForQuestionAnswering` (#19310). This introduces a new autoregressive model for question answering tasks in the library.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @LysandreJik @ArthurZucker @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19402/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19402/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19402",
"html_url": "https://github.com/huggingface/transformers/pull/19402",
"diff_url": "https://github.com/huggingface/transformers/pull/19402.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19402.patch",
"merged_at": 1665408659000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19401
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19401/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19401/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19401/events
|
https://github.com/huggingface/transformers/pull/19401
| 1,400,867,291
|
PR_kwDOCUB6oc5AYN_P
| 19,401
|
Adds DonutSwin to models exportable with ONNX
|
{
"login": "WaterKnight1998",
"id": 41203448,
"node_id": "MDQ6VXNlcjQxMjAzNDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/41203448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WaterKnight1998",
"html_url": "https://github.com/WaterKnight1998",
"followers_url": "https://api.github.com/users/WaterKnight1998/followers",
"following_url": "https://api.github.com/users/WaterKnight1998/following{/other_user}",
"gists_url": "https://api.github.com/users/WaterKnight1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WaterKnight1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WaterKnight1998/subscriptions",
"organizations_url": "https://api.github.com/users/WaterKnight1998/orgs",
"repos_url": "https://api.github.com/users/WaterKnight1998/repos",
"events_url": "https://api.github.com/users/WaterKnight1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/WaterKnight1998/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19401). All of your documentation changes will be reflected on that endpoint.",
"> Hi @WaterKnight1998,\r\n> \r\n> Thanks for your PR. It looks clean.\r\n> \r\n> Nice catch for the `model-type` variable that could be tricky to find: https://huggingface.co/naver-clova-ix/donut-base-finetuned-docvqa/blob/main/config.json#L138\r\n> \r\n> First DocumentQuestionAnswering model added. It's pretty cool!\r\n\r\nI don't see the comment. Do I need to solve anything? \r\n\r\nHowever, for testing locally I was using next code but I can't export the model :(\r\n\r\nI exported just encoder like this\r\n```python\r\nfrom transformers import VisionEncoderDecoderModel\r\nmodel = VisionEncoderDecoderModel.from_pretrained(\"naver-clova-ix/donut-base\")\r\nmodel.encoder.save_pretrained(\"./swin\")\r\n```\r\n\r\nThen trying to convert to onnx I get:\r\n```\r\npython -m transformers.onnx --model=./swin onnx/\r\nLocal PyTorch model found.\r\nFramework not requested. Using torch to export to ONNX.\r\n/home/david/.local/lib/python3.10/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. 
(Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2894.)\r\n return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]\r\nUsing framework PyTorch: 1.12.1+cu116\r\nTraceback (most recent call last):\r\n File \"/home/david/micromamba/envs/huggingface/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/home/david/micromamba/envs/huggingface/lib/python3.10/runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/onnx/__main__.py\", line 115, in <module>\r\n main()\r\n File \"/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/onnx/__main__.py\", line 97, in main\r\n onnx_inputs, onnx_outputs = export(\r\n File \"/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/onnx/convert.py\", line 337, in export\r\n return export_pytorch(preprocessor, model, config, opset, output, tokenizer=tokenizer, device=device)\r\n File \"/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/onnx/convert.py\", line 144, in export_pytorch\r\n model_inputs = config.generate_dummy_inputs(preprocessor, framework=TensorType.PYTORCH)\r\n File \"/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/onnx/config.py\", line 348, in generate_dummy_inputs\r\n raise ValueError(\r\nValueError: Unable to generate dummy inputs for the model. Please provide a tokenizer or a preprocessor.\r\n```\r\n\r\nDo I need to add more code?\r\n\r\n",
"> Do I need to add more code?\r\n\r\nYes, it would help if you overcharged the `generate_dummy_inputs()` function. Like the `LayoutLMv3` model, you need to define the process as a dummy input. ONNX conversion models use one batch (even random dummy data) to follow the data flow through the graph layers.\r\n\r\nCheck this here: https://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/models/layoutlmv3/configuration_layoutlmv3.py#L227-L294\r\n\r\n\r\nThis can help too, it's the base `generate_dummy_inputs()` function : https://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/onnx/config.py#L264-L378",
"@ChainYo @lewtun Relative imports fixed and added also the function to generate dummy functions. But when I convert the model into ONNX like this:\r\n\r\n```python\r\nimport transformers\r\nfrom pathlib import Path\r\n\r\n\r\nfrom transformers import VisionEncoderDecoderModel\r\nmodel = VisionEncoderDecoderModel.from_pretrained(\"naver-clova-ix/donut-base\")\r\nmodel.encoder.save_pretrained(\"./swin\")\r\n\r\nfrom transformers.onnx import export\r\nfrom transformers import AutoConfig\r\nfrom transformers.models.donut import *\r\n\r\nonnx_config = AutoConfig.from_pretrained(\"./swin\")\r\nonnx_config = DonutSwinOnnxConfig(onnx_config)\r\n\r\nprocessor = DonutProcessor.from_pretrained(\"naver-clova-ix/donut-base\")\r\nonnx_inputs, onnx_outputs = export(processor, model.encoder, onnx_config, onnx_config.default_onnx_opset, Path(\"model.onnx\"))\r\n```\r\n\r\nI get the following warnings:\r\n```\r\n/home/david/.local/lib/python3.10/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2894.)\r\n return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]\r\n/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:230: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if num_channels != self.num_channels:\r\n/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:220: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n if width % self.patch_size[1] != 0:\r\n/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:223: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if height % self.patch_size[0] != 0:\r\n/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:536: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if min(input_resolution) <= self.window_size:\r\n/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:136: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\r\n batch_size, height // window_size, window_size, width // window_size, window_size, num_channels\r\n/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:147: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n batch_size = math.floor(windows.shape[0] / (height * width / window_size / window_size))\r\n/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:148: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\r\n windows = windows.view(batch_size, height // window_size, width // window_size, window_size, window_size, -1)\r\n/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:622: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n was_padded = pad_values[3] > 0 or pad_values[5] > 0\r\n/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:623: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if was_padded:\r\n/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:411: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. 
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\r\n batch_size // mask_shape, mask_shape, self.num_attention_heads, dim, dim\r\n/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:682: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\r\n height_downsampled, width_downsampled = (height + 1) // 2, (width + 1) // 2\r\n/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:266: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n should_pad = (height % 2 == 1) or (width % 2 == 1)\r\n/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:267: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if should_pad:\r\nWARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.\r\nWARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. 
Please consider adding it in symbolic function.\r\n[... the same 'prim::Constant' shape-inference warning repeats many more times ...]\r\n```\r\n\r\nIs it ok?",
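Side note on the `__floordiv__` deprecation warnings in the trace above: they appear because truncating division and floor division disagree on negative operands, which is why newer PyTorch asks for an explicit `rounding_mode`. A minimal pure-Python illustration of the two rounding modes (not the Donut code itself):

```python
import math

def div_trunc(a, b):
    """Rounds toward zero -- the behavior of the deprecated tensor __floordiv__."""
    return math.trunc(a / b)

def div_floor(a, b):
    """Rounds toward negative infinity -- Python's // and rounding_mode='floor'."""
    return a // b

# The two modes agree for non-negative operands (the window-size arithmetic above)...
assert div_trunc(7, 2) == div_floor(7, 2) == 3
# ...but disagree once a value is negative, hence the deprecation warning.
assert div_trunc(-7, 2) == -3
assert div_floor(-7, 2) == -4
```

Since the window partitioning only divides non-negative sizes, both modes give the same result there, so the warning is noisy but harmless in this model.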
"> Is it ok?\r\n\r\nHi @WaterKnight1998,\r\nDo you get the ONNX files locally when you export the model?\r\nDid you try to load the file with https://netron.app ?\r\nCould you try to load an InferenceSession with Optimum or ONNX Runtime and use the model to see if it works?",
"> Hi @WaterKnight1998, Do you get onnx files locally when you export the model? \r\n\r\nYes, I get the files.\r\n\r\n> Did you try to load the file with https://netron.app ? \r\n\r\nYes, the model loaded.\r\n\r\n> Could you try to load an InferenceSession with Optimum or Onnx and use the model to see if it works?\r\n\r\nI am testing:\r\n```python\r\nfrom transformers.onnx import validate_model_outputs\r\n\r\nvalidate_model_outputs(\r\n onnx_config, tokenizer, base_model, onnx_path, onnx_outputs, onnx_config.atol_for_validation\r\n)\r\n```\r\n\r\nBut the Python process is killed here on my computer: https://github.com/huggingface/transformers/blob/main/src/transformers/onnx/convert.py#L392\r\n\r\nMaybe the model is too big for CPU?",
"Hi, I tested in Databricks and got this error:\r\n\r\n```\r\n\r\nValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.05213117599487305\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<command-489655835555725> in <module>\r\n 32 \r\n 33 from transformers.onnx import validate_model_outputs\r\n---> 34 validate_model_outputs(\r\n 35 onnx_config, processor, model.encoder, Path(\"model.onnx\"), onnx_outputs, onnx_config.atol_for_validation\r\n 36 )\r\n\r\n/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/onnx/convert.py in validate_model_outputs(config, preprocessor, reference_model, onnx_model, onnx_named_outputs, atol, tokenizer)\r\n 440 if not np.allclose(ref_value, ort_value, atol=atol):\r\n 441 logger.info(f\"\\t\\t-[x] values not close enough (atol: {atol})\")\r\n--> 442 raise ValueError(\r\n 443 \"Outputs values doesn't match between reference model and ONNX exported model: \"\r\n 444 f\"Got max absolute difference of: {np.amax(np.abs(ref_value - ort_value))}\"\r\n\r\nValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.05213117599487305\r\n```\r\n\r\nMaybe I need to update anything @ChainYo & @lewtun ? Or is it OK?\r\n",
"> Hi, I tested in Databricks and got this error:\r\n> \r\n> ```\r\n> \r\n> ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got a max absolute difference of: 0.05213117599487305\r\n> ---------------------------------------------------------------------------\r\n> ValueError Traceback (most recent call last)\r\n> <command-489655835555725> in <module>\r\n> 32 \r\n> 33 from transformers.onnx import validate_model_outputs\r\n> ---> 34 validate_model_outputs(\r\n> 35 onnx_config, processor, model.encoder, Path(\"model.onnx\"), onnx_outputs, onnx_config.atol_for_validation\r\n> 36 )\r\n> \r\n> /local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/onnx/convert.py in validate_model_outputs(config, preprocessor, reference_model, onnx_model, onnx_named_outputs, atol, tokenizer)\r\n> 440 if not np.allclose(ref_value, ort_value, atol=atol):\r\n> 441 logger.info(f\"\\t\\t-[x] values not close enough (atol: {atol})\")\r\n> --> 442 raise ValueError(\r\n> 443 \"Outputs values doesn't match between reference model and ONNX exported model: \"\r\n> 444 f\"Got max absolute difference of: {np.amax(np.abs(ref_value - ort_value))}\"\r\n> \r\n> ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got a max absolute difference of: 0.05213117599487305\r\n> ```\r\n> \r\n> Maybe I need to update anything @ChainYo & @lewtun? Or is it OK?\r\n\r\nI didn't think about this but do you have enough RAM locally? Imagine the model is 20Gb you need the double to convert one model (~40Gb) because scripts need to load both models simultaneously.\r\n\r\nThe error I see on Databricks is about `absolute tolerance, which is `1e-5` by default. 
There are two possibilities:\r\n- You selected the wrong `--feature` in your conversion command (maybe try something other than the default one)\r\n- You need to pass the `--atol` argument to your conversion command with the proper value, even though 0.052 seems like too much IMO (never go above `1e-3`).",
"> > Hi, I tested in Databricks and got this error:\r\n> > ```\r\n> > \r\n> > ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got a max absolute difference of: 0.05213117599487305\r\n> > ---------------------------------------------------------------------------\r\n> > ValueError Traceback (most recent call last)\r\n> > <command-489655835555725> in <module>\r\n> > 32 \r\n> > 33 from transformers.onnx import validate_model_outputs\r\n> > ---> 34 validate_model_outputs(\r\n> > 35 onnx_config, processor, model.encoder, Path(\"model.onnx\"), onnx_outputs, onnx_config.atol_for_validation\r\n> > 36 )\r\n> > \r\n> > /local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/onnx/convert.py in validate_model_outputs(config, preprocessor, reference_model, onnx_model, onnx_named_outputs, atol, tokenizer)\r\n> > 440 if not np.allclose(ref_value, ort_value, atol=atol):\r\n> > 441 logger.info(f\"\\t\\t-[x] values not close enough (atol: {atol})\")\r\n> > --> 442 raise ValueError(\r\n> > 443 \"Outputs values doesn't match between reference model and ONNX exported model: \"\r\n> > 444 f\"Got max absolute difference of: {np.amax(np.abs(ref_value - ort_value))}\"\r\n> > \r\n> > ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got a max absolute difference of: 0.05213117599487305\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > Maybe I need to update anything @ChainYo & @lewtun? Or is it OK?\r\n> \r\n> I didn't think about this but do you have enough RAM locally? Imagine the model is 20Gb you need the double to convert one model (~40Gb) because scripts need to load both models simultaneously.\r\n> \r\n\r\nGood point, I just have 32GB of RAM locally, probably this.\r\n\r\n> The error I see on Databricks is about `absolute tolerance, which is `1e-5` by default. 
There are two possibilities:\r\n> \r\n> * You selected the wrong `--feature` in your conversion command (maybe try something other than the default one)\r\n\r\nI tested with this:\r\n```python\r\nimport transformers\r\nfrom pathlib import Path\r\n\r\n\r\nfrom transformers import VisionEncoderDecoderModel\r\nmodel = VisionEncoderDecoderModel.from_pretrained(\"naver-clova-ix/donut-base\")\r\nmodel.encoder.save_pretrained(\"./swin\")\r\n\r\nfrom transformers.onnx import export\r\nfrom transformers import AutoConfig\r\nfrom transformers.models.donut import *\r\n\r\nonnx_config = AutoConfig.from_pretrained(\"./swin\")\r\nonnx_config = DonutSwinOnnxConfig(onnx_config)\r\n\r\nprocessor = DonutProcessor.from_pretrained(\"naver-clova-ix/donut-base\")\r\nonnx_inputs, onnx_outputs = export(processor, model.encoder, onnx_config, onnx_config.default_onnx_opset, Path(\"model.onnx\"))\r\n\r\nfrom transformers.onnx import validate_model_outputs\r\n\r\nvalidate_model_outputs(\r\n onnx_config, tokenizer, base_model, onnx_path, onnx_outputs, onnx_config.atol_for_validation\r\n)\r\n```\r\n\r\n> * You need to pass the argument `--atol` to your conversion command with the proper value even if 0.052 seems too much IMO (never go with more than `1e-3`).\r\n\r\nIn my config it is set to: \r\n```python\r\n@property\r\n def atol_for_validation(self) -> float:\r\n return 1e-4\r\n```\r\nShould I test with 1e-3? But I am getting 0.05\r\n\r\nI don't get why difference is too bight, maybe the warnings that I mentioned in other comment?\r\n\r\n```\r\n/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:230: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n if num_channels != self.num_channels:\r\n/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:220: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if width % self.patch_size[1] != 0:\r\n/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:223: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if height % self.patch_size[0] != 0:\r\n/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:536: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if min(input_resolution) <= self.window_size:\r\n/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:136: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. 
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\r\n batch_size, height // window_size, window_size, width // window_size, window_size, num_channels\r\n/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:147: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n batch_size = math.floor(windows.shape[0] / (height * width / window_size / window_size))\r\n/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:148: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\r\n windows = windows.view(batch_size, height // window_size, width // window_size, window_size, window_size, -1)\r\n/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:622: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n was_padded = pad_values[3] > 0 or pad_values[5] > 0\r\n/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:623: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if was_padded:\r\n/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:411: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\r\n batch_size // mask_shape, mask_shape, self.num_attention_heads, dim, dim\r\n/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:682: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. 
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\r\n height_downsampled, width_downsampled = (height + 1) // 2, (width + 1) // 2\r\n/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:266: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n should_pad = (height % 2 == 1) or (width % 2 == 1)\r\n/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:267: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n```",
"Hi again @ChainYo & @lewtun, I tested validate_model_outputs in different setups: \r\n\r\n- Nvidia T4: 0.01 difference\r\n- Nvidia V100: 0.06 difference\r\n- CPU: 16 cores & 56GB RAM: 0.04 difference\r\n\r\nI don't know where the problem is. What should I look at?",
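One hedged explanation for why the measured difference varies across T4 / V100 / CPU (an illustration, not a diagnosis of this particular model): floating-point addition is not associative, and each backend reduces sums in a different order, so mathematically identical graphs can produce slightly different outputs per device. A tiny deterministic example:

```python
# Floating-point addition is not associative: regrouping changes the result.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)

assert a != b              # the two groupings differ in the last bits
assert abs(a - b) < 1e-15  # ...but only by a tiny rounding error
```

Per-device variation of this size is normal; a 0.01-0.06 absolute difference, however, usually points at a real graph discrepancy rather than accumulation order alone.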
"> I don't know where is the problem. What can I look at?\r\n\r\nI think it just means that it's a bit random. I don't think it's linked to the hardware; try checking the atol like 10k times per hardware.\r\n\r\nIMO it seems evident that atol=1e-2 would do the trick, but it looks bad to accept atol > 1e-3.\r\n\r\nComing back to the warnings you had earlier while converting the model: did you check whether all the layers are implemented in ONNX?",
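For reference, the validation step boils down to an `np.allclose` check at the chosen `atol`. Below is a simplified numpy sketch of that comparison (the arrays are invented stand-ins, not real model outputs, and this is not the exact transformers code):

```python
import numpy as np

def outputs_close(ref_value, ort_value, atol):
    """Roughly the check used by transformers' validate_model_outputs."""
    max_diff = float(np.amax(np.abs(ref_value - ort_value)))
    return np.allclose(ref_value, ort_value, atol=atol), max_diff

ref = np.array([1.0, 2.0, 3.0], dtype=np.float32)
ort = ref + np.float32(5e-4)  # pretend the ONNX output drifted by 5e-4

ok_strict, max_diff = outputs_close(ref, ort, atol=1e-5)
ok_loose, _ = outputs_close(ref, ort, atol=1e-3)
assert not ok_strict  # rejected at a strict tolerance, like the errors above
assert ok_loose       # accepted once atol is loosened
```

This is why raising `--atol` makes the export "succeed": it only widens the acceptance band, it does not reduce the underlying discrepancy.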
"Hey @WaterKnight1998, I recently implemented a fix in #19475 for a bug that was causing all the Swin models to have incorrect ONNX graphs. Could you first try rebasing on `main` and checking the tolerance again?",
"> Hey @WaterKnight1998 I recently implemented a fix in #19475 that was causing all the Swin models to have incorrect ONNX graphs. Could you first try rebasing on `main` and checking the tolerance again?\r\n\r\nHi @lewtun, as you can see in the PR, I rebased and tested again, and I am seeing the same issue:\r\n\r\n```\r\nValueError Traceback (most recent call last)\r\n<command-489655835555726> in <module>\r\n 1 from transformers.onnx import validate_model_outputs\r\n----> 2 validate_model_outputs(\r\n 3 onnx_config, processor, model.encoder, Path(\"model.onnx\"), onnx_outputs, onnx_config.atol_for_validation\r\n 4 )\r\n\r\n/local_disk0/.ephemeral_nfs/envs/pythonEnv-f0e538e7-c99a-4698-9d4a-c04070b5c780/lib/python3.8/site-packages/transformers/onnx/convert.py in validate_model_outputs(config, preprocessor, reference_model, onnx_model, onnx_named_outputs, atol, tokenizer)\r\n 453 bad_indices = np.logical_not(np.isclose(ref_value, ort_value, atol=atol))\r\n 454 logger.info(f\"\\t\\t-[x] values not close enough (atol: {atol})\")\r\n--> 455 raise ValueError(\r\n 456 \"Outputs values doesn't match between reference model and ONNX exported model: \"\r\n 457 f\"Got max absolute difference of: {np.amax(np.abs(ref_value - ort_value))} for \"\r\n\r\nValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.06693840026855469 for [ -2.359991 4.654682 -14.478863 ... 5.7127304 1.8854475\r\n 0.7024307] vs [ -2.3598232 4.65485 -14.47826 ... 5.712929 1.8853188\r\n 0.7022476]\r\n```",
"Hi again, @lewtun & @ChainYo. I have compared this implementation with the original Swin Transformer, and the only difference is that the normalization layer is not present. Maybe that's the reason?",
"> Hi again, @lewtun & @ChainYo I have checked this implementation and original Swin Transformer, the only difference is that normalization layer is not present. Maybe that's the reason?\r\n\r\nThanks for that insight @WaterKnight1998, although I'd be surprised if that's the source of the issue. I'll take a closer look at the dummy data generation ASAP",
"Hi @WaterKnight1998 now that #19254 has been merged, can't you export the Donut checkpoints directly using this feature:\r\n\r\n```\r\npython -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-cord-v2 --feature=vision2seq-lm scratch/onnx\r\n```\r\n\r\nMy understanding is that Donut falls under the general class of vision encoder-decoder models, so a separate ONNX export might not be needed",
"> Hi @WaterKnight1998 now that #19254 has been merged, can't you export the Donut checkpoints directly using this feature:\r\n> \r\n> ```\r\n> python -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-cord-v2 --feature=vision2seq-lm scratch/onnx\r\n> ```\r\n> \r\n> My understanding is that Donut falls under the general class of vision encoder-decoder models, so a separate ONNX export might not be needed\r\n\r\nHi @lewtun, I tested this, but it is not working owing to the tolerance issue. In addition, maybe some users just want to export the encoder part. Adding @NielsRogge, as he implemented this in #18488.\r\n",
"> Hi @WaterKnight1998 now that #19254 has been merged, can't you export the Donut checkpoints directly using this feature:\r\n> \r\n> ```\r\n> python -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-cord-v2 --feature=vision2seq-lm scratch/onnx\r\n> ```\r\n> \r\n> My understanding is that Donut falls under the general class of vision encoder-decoder models, so a separate ONNX export might not be needed\r\n\r\n@lewtun While converting facing output value error (for the same command mentioned above)\r\n\r\n```\r\nValidating ONNX model...\r\n\t-[β] ONNX model output names match reference model ({'last_hidden_state'})\r\n\t- Validating ONNX Model output \"last_hidden_state\":\r\n\t\t-[β] (3, 1200, 1024) matches (3, 1200, 1024)\r\n\t\t-[x] values not close enough (atol: 1e-05)\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/python3.7/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py\", line 180, in <module>\r\n main()\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py\", line 113, in main\r\n args.atol if args.atol else encoder_onnx_config.atol_for_validation,\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py\", line 456, in validate_model_outputs\r\n \"Outputs values doesn't match between reference model and ONNX exported model: \"\r\nValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.0018157958984375 for [ 1.5980988 0.5988426 -14.8206215 ... -5.1114273 4.5024166\r\n 2.8833218] vs [ 1.5982218 0.59886694 -14.820812 ... -5.1115417 4.502474\r\n 2.883381 ]\r\n```\r\n\r\n\r\nBut separately I am able to convert the encoder and decoder model to ONNX as well as verified the output shape, that went well. 
But I don't know how to implement `model.generate()` instead of `model.run` for the decoder part.\r\n\r\n@lewtun @WaterKnight1998 Any suggestions here? (I can share the Colab if required.)\r\n\r\nThanks and regards.",
"> But separately I am able to convert the encoder and decoder model to ONNX as well as verified the output shape, that went well. But I don't know how to implement `model.generate()` instead of `model.run` for the decoder part.\r\n\r\n@BakingBrains Are you using the code from my PR to do the encoder conversion?",
"@lewtun and @WaterKnight1998 any updates on the decoder? I am able to convert the decoder model, but I'm not sure if that's the right method (the output shapes from the Donut decoder and the ONNX decoder are the same, though).",
"Hi, @lewtun @ChainYo @BakingBrains any news on this? I need this to get the model into production :(",
"@sgugger could you help us? We are looking forward to this feature!",
"Hey @WaterKnight1998, I'm taking a look at this, but it's turning out to be tricky to figure out where the discrepancy between the ONNX graph and the PyTorch model arises.",
"> Hey @WaterKnight1998 I'm taking a look at this, but it's turning out to be tricky to figure out why where the discrepancy arises with the ONNX graph vs PyTorch model.\r\n\r\nThank you very much for looking at it!",
"FYI if you need a temporary workaround and are willing to tolerate some error on the decoder, you can export one of the donut checkpoints on the `main` branch with:\r\n\r\n```\r\npython -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-cord-v2 --feature=vision2seq-lm scratch/onnx --atol 3e-3\r\n```\r\n\r\nThis will produce two ONNX files (`encoder_model.onnx` and `decoder_model.onnx`) that you can then run inference with. \r\n\r\n",
"> But separately I am able to convert the encoder and decoder model to ONNX as well as verified the output shape, that went well. But I don't know how to implement `model.generate()` instead of `model.run` for the decoder part.\r\n\r\nGood question @BakingBrains ! As of now, you'll have to roll your own generation loop with `onnxruntime`. An alternative would be to implement an `ORTModelForVisionSeq2Seq` in `optimum`, similar to how @mht-sharma is doing this for Whisper: https://github.com/huggingface/optimum/pull/420/files#diff-77c4bfa5fbc9262eda15bbbc01d9796a0daa33e6725ca41e1cfe600a702d0bfc",
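On rolling your own generation loop with `onnxruntime`: the core of greedy decoding is just repeated decoder calls with an argmax over the last position's logits. Below is a minimal, self-contained sketch where the "decoder" is a toy numpy function standing in for `decoder_session.run(...)`; the vocabulary size and token ids are invented for illustration:

```python
import numpy as np

VOCAB_SIZE, START_TOKEN_ID, EOS_TOKEN_ID = 8, 0, 7  # made-up toy values

def fake_decoder_run(input_ids, encoder_hidden_states):
    """Stand-in for decoder_session.run(); returns logits shaped (1, seq_len, vocab)."""
    seq_len = input_ids.shape[1]
    logits = np.zeros((1, seq_len, VOCAB_SIZE), dtype=np.float32)
    next_token = (int(input_ids[0, -1]) + 3) % VOCAB_SIZE  # deterministic toy rule
    logits[0, -1, next_token] = 1.0
    return logits

def greedy_generate(encoder_hidden_states, max_new_tokens=10):
    input_ids = np.array([[START_TOKEN_ID]], dtype=np.int64)
    for _ in range(max_new_tokens):
        logits = fake_decoder_run(input_ids, encoder_hidden_states)
        next_id = int(np.argmax(logits[0, -1]))  # greedy: pick the top logit
        input_ids = np.concatenate(
            [input_ids, np.array([[next_id]], dtype=np.int64)], axis=1
        )
        if next_id == EOS_TOKEN_ID:
            break
    return input_ids[0].tolist()

print(greedy_generate(encoder_hidden_states=None))  # -> [0, 3, 6, 1, 4, 7]
```

With real exports you would replace `fake_decoder_run` with a call to an `onnxruntime.InferenceSession` over the exported decoder file, feeding the encoder session's output as the cross-attention input; the exact input/output names depend on the export config, so inspect them with `session.get_inputs()` rather than assuming the names used here.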
"> > But separately I am able to convert the encoder and decoder model to ONNX as well as verified the output shape, that went well. But I don't know how to implement `model.generate()` instead of `model.run` for the decoder part.\r\n> \r\n> Good question @BakingBrains ! As of now, you'll have to roll your own generation loop with `onnxruntime`. An alternative would be to implement an `ORTModelForVisionSeq2Seq` in `optimum`, similar to how @mht-sharma is doing this for Whisper: https://github.com/huggingface/optimum/pull/420/files#diff-77c4bfa5fbc9262eda15bbbc01d9796a0daa33e6725ca41e1cfe600a702d0bfc\r\n\r\nThank you @lewtun. Got it.",
"> FYI if you need a temporary workaround and are willing to tolerate some error on the decoder, you can export one of the donut checkpoints on the `main` branch with:\r\n> \r\n> ```\r\n> python -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-cord-v2 --feature=vision2seq-lm scratch/onnx --atol 3e-3\r\n> ```\r\n> \r\n> This will produce two ONNX files (`encoder_model.onnx` and `decoder_model.onnx`) that you can then run inference with.\r\n\r\nOk, thank you very much. I hope you find a solution and we can merge this branch.",
"I've created an issue to track the issue with specifically exporting Donut checkpoints: https://github.com/huggingface/transformers/issues/19983\r\n\r\n@WaterKnight1998 can you please share some code snippets on how you currently use the DonutSwin models for document QA and image classification? If I'm not mistaken, inference with these models is only supported via the `VisionEncoderDecoder` model, so once the above issue is resolved you should be able to use the export without needing the new tasks included in this PR",
"> I've created an issue to track the issue with specifically exporting Donut checkpoints: #19983\r\n> \r\n> @WaterKnight1998 can you please share some code snippets on how you currently use the DonutSwin models for document QA and image classification? If I'm not mistaken, inference with these models is only supported via the `VisionEncoderDecoder` model, so once the above issue is resolved you should be able to use the export without needing the new tasks included in this PR\r\n\r\nYes, you are right, maybe we can remove those tasks. However, I think it would be good to allow users to export the encoder independently. Maybe someone wants to re-use it for a different model or architecture"
] | 1,665
| 1,670
| 1,670
|
NONE
| null |
# What does this PR do?
Fixes #16308
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@lewtun & @ChainYo for ONNX and @NielsRogge for Donut and Document Question Answering.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19401/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19401/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19401",
"html_url": "https://github.com/huggingface/transformers/pull/19401",
"diff_url": "https://github.com/huggingface/transformers/pull/19401.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19401.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19400
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19400/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19400/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19400/events
|
https://github.com/huggingface/transformers/pull/19400
| 1,400,829,576
|
PR_kwDOCUB6oc5AYGFW
| 19,400
|
Removes `ProphetNet` config dependency from `XLM-ProphetNet` config
|
{
"login": "srhrshr",
"id": 2330069,
"node_id": "MDQ6VXNlcjIzMzAwNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2330069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srhrshr",
"html_url": "https://github.com/srhrshr",
"followers_url": "https://api.github.com/users/srhrshr/followers",
"following_url": "https://api.github.com/users/srhrshr/following{/other_user}",
"gists_url": "https://api.github.com/users/srhrshr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srhrshr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srhrshr/subscriptions",
"organizations_url": "https://api.github.com/users/srhrshr/orgs",
"repos_url": "https://api.github.com/users/srhrshr/repos",
"events_url": "https://api.github.com/users/srhrshr/events{/privacy}",
"received_events_url": "https://api.github.com/users/srhrshr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
@sgugger ,
Per the issue #19303, the `ProphetNet` config dependency is removed from `XLMProphetNetConfig` and it now directly inherits from `PretrainedConfig`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19400/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19400",
"html_url": "https://github.com/huggingface/transformers/pull/19400",
"diff_url": "https://github.com/huggingface/transformers/pull/19400.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19400.patch",
"merged_at": 1665149184000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19399
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19399/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19399/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19399/events
|
https://github.com/huggingface/transformers/issues/19399
| 1,400,685,915
|
I_kwDOCUB6oc5TfMVb
| 19,399
|
`device_map="auto"` fails for GPT2 on CPU
|
{
"login": "dirkgr",
"id": 920638,
"node_id": "MDQ6VXNlcjkyMDYzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/920638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dirkgr",
"html_url": "https://github.com/dirkgr",
"followers_url": "https://api.github.com/users/dirkgr/followers",
"following_url": "https://api.github.com/users/dirkgr/following{/other_user}",
"gists_url": "https://api.github.com/users/dirkgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dirkgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dirkgr/subscriptions",
"organizations_url": "https://api.github.com/users/dirkgr/orgs",
"repos_url": "https://api.github.com/users/dirkgr/repos",
"events_url": "https://api.github.com/users/dirkgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dirkgr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Workaround: Don't use `device_map` on CPU π€·π» ",
"cc @sgugger ",
"Yes, `device_map=\"auto\"` is not supported in CPU-only environments.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Should be supported now on the main branch of Accelerate!",
"Yay!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
### System Info
Python 3.9, Mac, `transformers==4.21.3`
### Who can help?
@patil-suraj, @patrickvonplaten, @LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
On a box with no GPUs:
```Python
transformers.AutoModelForCausalLM.from_pretrained("gpt2", device_map="auto")
```
### Expected behavior
I'd expect a model. I get an exception.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19399/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19398
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19398/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19398/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19398/events
|
https://github.com/huggingface/transformers/pull/19398
| 1,400,627,855
|
PR_kwDOCUB6oc5AXbbF
| 19,398
|
Removed Bert and XLM dependency from herbert
|
{
"login": "harry7337",
"id": 75776208,
"node_id": "MDQ6VXNlcjc1Nzc2MjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/75776208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harry7337",
"html_url": "https://github.com/harry7337",
"followers_url": "https://api.github.com/users/harry7337/followers",
"following_url": "https://api.github.com/users/harry7337/following{/other_user}",
"gists_url": "https://api.github.com/users/harry7337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harry7337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harry7337/subscriptions",
"organizations_url": "https://api.github.com/users/harry7337/orgs",
"repos_url": "https://api.github.com/users/harry7337/repos",
"events_url": "https://api.github.com/users/harry7337/events{/privacy}",
"received_events_url": "https://api.github.com/users/harry7337/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes issue #19303. Removed the dependency of the herbert tokenizer on the bert and XLM tokenizers.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19398/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19398",
"html_url": "https://github.com/huggingface/transformers/pull/19398",
"diff_url": "https://github.com/huggingface/transformers/pull/19398.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19398.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19397
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19397/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19397/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19397/events
|
https://github.com/huggingface/transformers/issues/19397
| 1,400,549,054
|
I_kwDOCUB6oc5Teq6-
| 19,397
|
Hyperparameter Sweep for Selection of Best Pre-trained model
|
{
"login": "ss2342",
"id": 39809317,
"node_id": "MDQ6VXNlcjM5ODA5MzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/39809317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ss2342",
"html_url": "https://github.com/ss2342",
"followers_url": "https://api.github.com/users/ss2342/followers",
"following_url": "https://api.github.com/users/ss2342/following{/other_user}",
"gists_url": "https://api.github.com/users/ss2342/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ss2342/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ss2342/subscriptions",
"organizations_url": "https://api.github.com/users/ss2342/orgs",
"repos_url": "https://api.github.com/users/ss2342/repos",
"events_url": "https://api.github.com/users/ss2342/events{/privacy}",
"received_events_url": "https://api.github.com/users/ss2342/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
NONE
| null |
### Feature request
Perhaps I have simply not figured this out, but the current `hyperparameter_search` function is set up so that it only allows you to pass in hyperparameters that are part of `TrainingArguments`. However, one key hyperparameter that we are not currently able to pass in is the model type itself. If I want to pass in a list of models that I would like to try out as part of the hyperparameter space, it is not possible with the current setup. For example, I want to try passing in something like this to my hyperparameter space:
`model_type = trial.suggest_categorical("model_type", ["bert-base-uncased", "roberta-base", "xlnet"])`
### Motivation
Being able to pass in model names as part of the `hp_space` could be very useful, especially when one is trying to determine which models might work best for their use case. Compiling a list of candidate models after reading the literature and passing that list to a hyperparameter sweep would make this much easier.
It could be used in this manner:
```
model_type = trial.suggest_categorical("model_type", ["bert-base-uncased", "roberta-base", "xlnet"])
epochs = trial.suggest_categorical("epochs", EPOCHS)
batch_size = trial.suggest_categorical("batch_size", BATCH_SIZE)
learning_rate = trial.suggest_categorical("learning_rate", LEARNING_RATES)
scheduler = trial.suggest_categorical("scheduler", SCHEDULERS)
model_name = trial.suggest_categorical("model_name", MODEL_NAMES)
hp_space = {
"model_name": model_name,
"batch_size": batch_size,
"learning_rate": learning_rate,
"scheduler": scheduler,
"epochs": epochs,
}
## Passing it to trainer
trainer = Trainer(
    args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
data_collator=data_collator,
tokenizer=tokenizer,
)
trainer.hyperparameter_search(hp_space=hp_space)
```
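A minimal, dependency-free sketch of this idea: treat the checkpoint name as just another categorical hyperparameter in a plain random search. Here `evaluate` is a hypothetical placeholder for a real fine-tune-and-validate run, and the model names and value grids are only illustrative:

```python
import random

MODEL_NAMES = ["bert-base-uncased", "roberta-base", "xlnet-base-cased"]

def evaluate(params):
    # Hypothetical placeholder: in practice this would fine-tune
    # params["model_name"] with the given settings and return a
    # validation metric (higher is better here).
    return random.random()

def random_search(n_trials=10, seed=0):
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(n_trials):
        # The model checkpoint is sampled like any other categorical value.
        params = {
            "model_name": rng.choice(MODEL_NAMES),
            "learning_rate": rng.choice([1e-5, 3e-5, 5e-5]),
            "batch_size": rng.choice([16, 32]),
        }
        score = evaluate(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_score, best_params
```

With `optuna`, the same idea would map onto `trial.suggest_categorical("model_name", MODEL_NAMES)` inside the objective, combined with `Trainer`'s `model_init` so that a fresh model is built for each trial.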
### Your contribution
I would love to help work on this issue, if a solution does not possibly exist.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19397/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19397/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19396
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19396/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19396/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19396/events
|
https://github.com/huggingface/transformers/issues/19396
| 1,400,404,918
|
I_kwDOCUB6oc5TeHu2
| 19,396
|
Strange behavior of translation (text generation) pipelines
|
{
"login": "Fikavec",
"id": 83672821,
"node_id": "MDQ6VXNlcjgzNjcyODIx",
"avatar_url": "https://avatars.githubusercontent.com/u/83672821?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fikavec",
"html_url": "https://github.com/Fikavec",
"followers_url": "https://api.github.com/users/Fikavec/followers",
"following_url": "https://api.github.com/users/Fikavec/following{/other_user}",
"gists_url": "https://api.github.com/users/Fikavec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fikavec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fikavec/subscriptions",
"organizations_url": "https://api.github.com/users/Fikavec/orgs",
"repos_url": "https://api.github.com/users/Fikavec/repos",
"events_url": "https://api.github.com/users/Fikavec/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fikavec/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @Fikavec π \r\n\r\nText generation can be very tricky, as you've just explained. The quality of the generated text (i.e. the translation) depends on two things: the model and the generation method.\r\n\r\nRegarding the model, my suggestion would be to use a larger model OR a model that contains a single language pair (as opposed to multilingual). You can use the language tags on the Hugging Face Hub π€ to help you navigate the sea of models.\r\n\r\nRegarding the generation method, you've already mentioned the blog post I usually redirect to in this sort of issue :) If you force `min_length`, the model tends to hallucinate after it runs out of the original content, so I highly advise not to use it. However, if you don't do it, you may get an output that is too short (your first example) -- in that case, you may try playing with the [`length_penalty`](https://huggingface.co/docs/transformers/v4.22.2/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.length_penalty) parameter (which only has impact with `num_beams`>1).\r\n\r\nIf these two sets of tips do not yield successful results, I still have good news for you -- we are working to implement a new generation strategy which may help in your case (https://github.com/huggingface/transformers/issues/19182) :)",
"Thanks @gante for the explanation and for your work on this great project! I can't figure out whether this issue is a property of the huggingface generation implementation or of the original fairseq translation models. Translation is a very specific text generation task where precise output length is critical - if the output length or other generation parameters are necessary for a correct translation, they could be predicted by a special model on top of the tokenizer before generation. [#19182](https://github.com/huggingface/transformers/issues/19182) is interesting, but after spending a lot of time searching for parameters manually, I think that creating a single formula for 40 000 translation directions would be a miracle. Maybe the fairseq team could train a model to predict the best generation parameters for 200+ languages on their parallel training data, just as the language identification model was trained. In the future of generation development, models for selecting the best generation parameters could become a standard step after tokenization, or a parameter of the generate functions, e.g. generate(input_text, params_predictor=predict_best_params_model), with predict_best_params_models developed and trained separately for different tasks (translation, qa, [prompt engineering](https://blog.andrewcantino.com/blog/2021/04/21/prompt-engineering-tips-and-tricks/), etc.) by the authors of generative models and the community, with dedicated test sets and metrics. What do you think about this?",
"> if output length or other generation parameters is necessary for correct translation\r\n\r\nIt is not -- generation ends when the model predicts a special token ([`eos_token_id`](https://huggingface.co/facebook/nllb-200-distilled-600M/blob/main/config.json#L20)) OR when the generation length reaches `max_length`. This is why you should add a large `max_length`, so the translation is not constrained by it :)\r\n\r\nAs for your other question, as you wrote, setting the parameters depends on the model itself and your goals -- there is no silver bullet that would fit everyone. However, we have a library that might be of your interest: [evaluate](https://huggingface.co/docs/evaluate/index)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
NONE
| null |
### System Info
- `transformers` version: 4.22.2
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
Models:
- NLLB
- M2M100
Example - [[QUESTION] model translates only a part of the text](https://huggingface.co/facebook/nllb-200-distilled-600M/discussions/6)
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
translator = pipeline('translation', model=model, tokenizer=tokenizer, src_lang="ces_Latn", tgt_lang='eng_Latn',device=0)
# Text with 3 sentences: 1) Zuzka bydlΓ­ v panelΓ‘ku na 9 podlaΕΎΓ­. 2) AniΔka bydlΓ­ o 3 podlaΕΎΓ­ vΓ½Ε‘e. 3) Na kterΓ©m podlaΕΎΓ­ bydlΓ­ AniΔka?
text="Zuzka bydlΓ­ v panelΓ‘ku na 9 podlaΕΎΓ­. AniΔka bydlΓ­ o 3 podlaΕΎΓ­ vΓ½Ε‘e. Na kterΓ©m podlaΕΎΓ­ bydlΓ­ AniΔka?"
translator(text, max_length=512, num_beams=5,)
```
Outputs only one sentence (**2 sentences lost**):
> [{'translation_text': 'Zuzka lives in a nine-story penthouse, AniΔka lives three floors up.'}]
If we add the `min_length` parameter to the translator, as in the [how-to-generate article](https://huggingface.co/blog/how-to-generate):
`translator(text, max_length=512, num_beams=5, min_length=512)`
(for many languages (ja, zh, etc.) we don't know the translated length in tokens, but we don't want to lose any text, so we set `min_length` higher)
It outputs **translated text with repeats**:
> {'translation_text': "Zuzka lives in a boarding house on the ninth floor, AniΔka lives three floors upstairs, which floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka lives on, what floor does AniΔka lives on, what floor does AniΔka lives on, what floor does AniΔka lives on, what floor does AniΔka lives on, what floor does she lives on, what floor does AniΔka lives on, what floor does she lives on, what floor does she lives on, what floor, what floor does she lives on, what floor she lives on, what floor, and what floor she lives on the floor, and what floor, and what is she lives on the floor, and what is the floor, and what is the floor of the floor, and what is the floor, and what is the floor, and what is the floor, and what is the floor, and what is the floor, and what is the floor, and what is the floor, and what is the floor, and what does she's on the floor, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, what is, and what is, what is, what is, what is, and what is, what is, and what is, what is, and what is, what is, what is, and what is, what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, what is, and what is, what is, and what is, and what is, what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is"}
If we try many other parameter combinations:
`translator(text, max_length=512, min_length=512, num_beams=5, no_repeat_ngram_size=3, do_sample=True, temperature=1.5, top_p=0.9, early_stopping=True, remove_invalid_values=True)
`
The translation will contain **generated text that was not in the original sentence:**
> [{'translation_text': "Zuzka's living in a penthouse on the ninth floor, AniΔka's in a three story apartment, which floor does AniΔka reside on, and what floor is the building on which the building is housed, and how are you supposed to know where she's staying, so what's the floor where the apartment is on the 9th floor... and what is the first floor where AniΔka is staying... and how is the second floor of the house where the house is, so... what floor does she live on, where's AniΔka, the third floor, and where is AniΔka staying in the apartment on the 3rd floor, where you can't find her room, where she can'd say she'd like to go on her own, and you'd wanna know what to do with her room in the next room, so you can I'd tell me that she can be sure that you's not going to be happy with the room to do it, right now, that is, it's all right, you know, right or at least I's right, and I don't think that she't, and that't know that they's what I'll have something that, and we'll want you know that you can be honestly, you'll know that'll be honest that you, right, I mean that I'm sure, you can tell you't you will be right, that that it'll say it't be all right or whatever you know about that you will, you don're not that, but it'd you've got to you know it'm gonna be true, you say that you know right, if they't that's going to me that, I't say, and it' and that, that I will be true or you won'll always, and is, and she'll let me, you will not that'm right, yes or what you' will be that that right, but, and will be, you are gonna be safe to you'l right, or that that'lll be true that we't ever, and yes, but I'l be, right right, they'm going to say, she will be honest or not gonna say that we are, and, that're all right right that he is, you gonna be, but you'"}]
What parameters should be used to get a correct translation of the correct length for the many languages whose translated lengths are unknown? Why does text generation start instead of translation? Is this the behavior of the transformers pipelines or of the translation models?
### Expected behavior
English translation with 3 sentences:
- Zuzka lives in a block of flats on 9 floors.
- Anna lives 3 floors above.
- Which floor does Anicka live on?
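One possible workaround for dropped sentences, assuming per-sentence translation is acceptable for the use case, is to split the input before calling the pipeline so that each generation call only has to cover one sentence. This naive stdlib-only splitter is only a sketch, not something proposed in the thread:

```python
import re

def split_sentences(text: str) -> list[str]:
    # Naive split on sentence-final punctuation followed by whitespace;
    # good enough for simple prose, not a general-purpose segmenter.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

text = (
    "Zuzka bydlΓ­ v panelΓ‘ku na 9 podlaΕΎΓ­. "
    "AniΔka bydlΓ­ o 3 podlaΕΎΓ­ vΓ½Ε‘e. "
    "Na kterΓ©m podlaΕΎΓ­ bydlΓ­ AniΔka?"
)
sentences = split_sentences(text)
# Each piece would then go through the pipeline separately, e.g.:
# [translator(s, max_length=512)[0]["translation_text"] for s in sentences]
```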
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19396/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19395
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19395/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19395/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19395/events
|
https://github.com/huggingface/transformers/issues/19395
| 1,400,352,424
|
I_kwDOCUB6oc5Td66o
| 19,395
|
XLMRobertaTokenizerFast Error
|
{
"login": "elmadany",
"id": 3743657,
"node_id": "MDQ6VXNlcjM3NDM2NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3743657?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elmadany",
"html_url": "https://github.com/elmadany",
"followers_url": "https://api.github.com/users/elmadany/followers",
"following_url": "https://api.github.com/users/elmadany/following{/other_user}",
"gists_url": "https://api.github.com/users/elmadany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elmadany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elmadany/subscriptions",
"organizations_url": "https://api.github.com/users/elmadany/orgs",
"repos_url": "https://api.github.com/users/elmadany/repos",
"events_url": "https://api.github.com/users/elmadany/events{/privacy}",
"received_events_url": "https://api.github.com/users/elmadany/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@SaulLu",
"Maybe of interest to @ArthurZucker as well!",
"Hi @elmadany\r\n\r\nConverting a sentencepiece vocabulary to a fast version of the tokenizer can indeed take time, but it only needs to be done once: once loaded, you can save the converted version in the fast format so you don't have to redo the conversion the next time.\r\n\r\nOn the other hand, I am a little more concerned when you say that it \"doesn't load\": do you have an error message? ",
"Thanks @SaulLu\r\nyes, it took 4 hours to convert.\r\nSo, I will close this issue"
] | 1,665
| 1,665
| 1,665
|
NONE
| null |
### System Info
I've created a new vocab using SentencePiece BPE and trained an xlm-roberta-base from scratch using [run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py). When I try loading the tokenizer with AutoTokenizer or XLMRobertaTokenizerFast it takes a long time and doesn't load, while loading it with XLMRobertaTokenizer works well.
I noticed this problem when I tried to fine-tune the model on NER using the official script [run_ner.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner.py), which works only with fast tokenizers.
I was wondering how to convert it to a fast tokenizer.
### Who can help?
@SaulLu @sgu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I built the vocab using this command
spm_train --input="merged_vocab_data.txt" --model_prefix=sentencepiece.bpe --vocab_size=250002 --character_coverage=0.9995 --model_type=bpe --pad_id=0 --eos_id=1 --unk_id=2 --bos_id=-1
I used this vocab to initialize the model for training with the official run_mlm.py script.
### Expected behavior
How to convert the SentencePiece BPE slow tokenizer to a fast tokenizer
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19395/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19394
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19394/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19394/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19394/events
|
https://github.com/huggingface/transformers/issues/19394
| 1,400,330,830
|
I_kwDOCUB6oc5Td1pO
| 19,394
|
~7% drop in performance is noticed for huggingface GPT2 model
|
{
"login": "rraminen",
"id": 62723901,
"node_id": "MDQ6VXNlcjYyNzIzOTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/62723901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rraminen",
"html_url": "https://github.com/rraminen",
"followers_url": "https://api.github.com/users/rraminen/followers",
"following_url": "https://api.github.com/users/rraminen/following{/other_user}",
"gists_url": "https://api.github.com/users/rraminen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rraminen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rraminen/subscriptions",
"organizations_url": "https://api.github.com/users/rraminen/orgs",
"repos_url": "https://api.github.com/users/rraminen/repos",
"events_url": "https://api.github.com/users/rraminen/events{/privacy}",
"received_events_url": "https://api.github.com/users/rraminen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello @rraminen, could you mention the two versions of `transformers` between which you see the difference in performance?\r\nBy performance, do you mean performance in metrics or performance in processing power/speed of iteration?\r\nThank you.",
"Hi @LysandreJik, thank you for your response. \r\n\r\nThe performance metric I am looking at is **stable_train_samples_per_second**.\r\n\r\nThe transformers version before performance drop is 4.19.0.dev0.\r\n\r\nPerformance drop is noticed 4.22.0.dev0 and 4.23.0.dev0 transformers versions.",
"@rraminen , stable_train_samples_per_second is only in the ROCm fork of HF transformers. This equates to train_samples_per_second with a warmup period. ",
"So I get the full context: is this happening only on ROCm hardware and using the fork from `ROCmSoftwarePlatform`, or is it happening across the library?\r\n\r\nI'm trying to understand if it's linked to this repository or to the fork. Thanks!",
"We observed this on ROCm hardware. @rraminen , can you please test on A100 to confirm if the drop is not limited to MI250 ? \r\n\r\nThe perf drop is not happening across the library. Just gpt2. ",
"The perf drop is not observed on A100. ",
"@rraminen , I dont think upstream HF can help much here; this is on AMD to root-cause. Please close this ticket. \r\n\r\nLet's get started with figuring out which commit caused the regression on ROCm, and tracking internally. ",
"I agree @amathews-amd, I don't think we're in a very large capacity to help here. We're happy to follow along however, so please let us know if there's anything we can do to help out.",
"Thank you @LysandreJik, closing this issue. ",
"Just seconding what @LysandreJik said: if we can help in any way to improve support or performance of our software on AMD chips, we'd like to help\r\n\r\nJust ping us"
] | 1,665
| 1,665
| 1,665
|
NONE
| null |
### System Info
platform: ROCm AMD device
python version: 3.7.13
There is a ~7% performance drop for the Hugging Face GPT2 model after the IFU (https://github.com/ROCmSoftwarePlatform/transformers/pull/15) on the https://github.com/ROCmSoftwarePlatform/transformers repository.
@patil-suraj, @patrickvonplaten, could you please help me find the change in transformers that is responsible for the performance drop?
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Command used to run the model:
python3 -m torch.distributed.launch --nproc_per_node=8 transformers/examples/pytorch/language-modeling/run_clm.py --output_dir output --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --label_smoothing 0.1 --logging_steps 1 --logging_dir log --fp16 --dataloader_num_workers 1 --skip_memory_metrics --per_device_train_batch_size=8 --overwrite_output_dir --max_steps 150
### Expected behavior
I was expecting to see similar or better performance of the model after IFU on Aug 9, 2022.
I also tried with the recent commits after Aug 9, 2022. Those seem to worsen the performance much more.
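For regressions like this, it helps to pin down exactly how the throughput metric is computed before bisecting commits. Below is a framework-agnostic sketch of a samples-per-second measurement with a warmup period, in the spirit of `stable_train_samples_per_second`; the `train_step` callable here is a hypothetical stand-in, not the actual Trainer code:

```python
import time

def measure_throughput(train_step, batch_size, steps=100, warmup=10):
    """Run `train_step` repeatedly and report samples/sec, skipping warmup steps."""
    for _ in range(warmup):  # let kernels compile and caches warm up
        train_step()
    start = time.perf_counter()
    for _ in range(steps):
        train_step()
    elapsed = time.perf_counter() - start
    return steps * batch_size / elapsed

# Hypothetical stand-in for one optimizer step:
rate = measure_throughput(lambda: sum(range(10_000)), batch_size=8, steps=50, warmup=5)
print(f"{rate:.1f} samples/sec")
```

Measuring both transformers versions with the same harness rules out differences in how warmup is excluded before blaming a library commit.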
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19394/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19393
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19393/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19393/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19393/events
|
https://github.com/huggingface/transformers/pull/19393
| 1,400,287,843
|
PR_kwDOCUB6oc5AWSyx
| 19,393
|
Change link of repojacking vulnerable link
|
{
"login": "Ilaygoldman",
"id": 29836366,
"node_id": "MDQ6VXNlcjI5ODM2MzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/29836366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ilaygoldman",
"html_url": "https://github.com/Ilaygoldman",
"followers_url": "https://api.github.com/users/Ilaygoldman/followers",
"following_url": "https://api.github.com/users/Ilaygoldman/following{/other_user}",
"gists_url": "https://api.github.com/users/Ilaygoldman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ilaygoldman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ilaygoldman/subscriptions",
"organizations_url": "https://api.github.com/users/Ilaygoldman/orgs",
"repos_url": "https://api.github.com/users/Ilaygoldman/repos",
"events_url": "https://api.github.com/users/Ilaygoldman/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ilaygoldman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Hello from Hacktoberfest :)
# What does this PR do?
The link to https://github.com/vasudevgupta7/bigbird is vulnerable to repojacking (it redirects to the original project, which changed its name); you should update the link to the project's current name. If the link is not changed, an attacker could register the linked repository and attack users who trust your links.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19393/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19393",
"html_url": "https://github.com/huggingface/transformers/pull/19393",
"diff_url": "https://github.com/huggingface/transformers/pull/19393.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19393.patch",
"merged_at": 1665090399000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19392
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19392/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19392/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19392/events
|
https://github.com/huggingface/transformers/pull/19392
| 1,400,156,787
|
PR_kwDOCUB6oc5AV2Nx
| 19,392
|
Stop relying on huggingface_hub's private methods
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
MEMBER
| null |
Updates the `move_cache` method to stop relying on `huggingface_hub`'s private methods.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19392/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19392",
"html_url": "https://github.com/huggingface/transformers/pull/19392",
"diff_url": "https://github.com/huggingface/transformers/pull/19392.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19392.patch",
"merged_at": 1665407973000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19391
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19391/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19391/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19391/events
|
https://github.com/huggingface/transformers/issues/19391
| 1,400,129,016
|
I_kwDOCUB6oc5TdEX4
| 19,391
|
T5tokenizer.pre_trained("t5-small") is not callable whereas AutoTokenizer worked fine
|
{
"login": "mellow-d",
"id": 74917668,
"node_id": "MDQ6VXNlcjc0OTE3NjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/74917668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mellow-d",
"html_url": "https://github.com/mellow-d",
"followers_url": "https://api.github.com/users/mellow-d/followers",
"following_url": "https://api.github.com/users/mellow-d/following{/other_user}",
"gists_url": "https://api.github.com/users/mellow-d/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mellow-d/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mellow-d/subscriptions",
"organizations_url": "https://api.github.com/users/mellow-d/orgs",
"repos_url": "https://api.github.com/users/mellow-d/repos",
"events_url": "https://api.github.com/users/mellow-d/events{/privacy}",
"received_events_url": "https://api.github.com/users/mellow-d/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @mellow-d πΒ Having a popular project like `transformers` means we get many support and feature requests β if we want to maximize how much we help the community, the community has to help us stay productive π\r\n\r\nTo that end, please share a *short* script where the issue is clearly reproducible on *any* computer. Thank you π€",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
NONE
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19391/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19390
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19390/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19390/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19390/events
|
https://github.com/huggingface/transformers/pull/19390
| 1,400,107,753
|
PR_kwDOCUB6oc5AVrmL
| 19,390
|
add ONNX support for swin transformer
|
{
"login": "bibhabasumohapatra",
"id": 68384968,
"node_id": "MDQ6VXNlcjY4Mzg0OTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/68384968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bibhabasumohapatra",
"html_url": "https://github.com/bibhabasumohapatra",
"followers_url": "https://api.github.com/users/bibhabasumohapatra/followers",
"following_url": "https://api.github.com/users/bibhabasumohapatra/following{/other_user}",
"gists_url": "https://api.github.com/users/bibhabasumohapatra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bibhabasumohapatra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bibhabasumohapatra/subscriptions",
"organizations_url": "https://api.github.com/users/bibhabasumohapatra/orgs",
"repos_url": "https://api.github.com/users/bibhabasumohapatra/repos",
"events_url": "https://api.github.com/users/bibhabasumohapatra/events{/privacy}",
"received_events_url": "https://api.github.com/users/bibhabasumohapatra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks \nGreat start on Huggingface, hope to keep contributing more. "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your great contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same person ---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Addresses #16308
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
it was already addressed in PR #18171 (but I mistakenly closed it, sorry for the repeat PR)
@lewtun @ChainYo
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19390/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19390/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19390",
"html_url": "https://github.com/huggingface/transformers/pull/19390",
"diff_url": "https://github.com/huggingface/transformers/pull/19390.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19390.patch",
"merged_at": 1665149004000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19389
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19389/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19389/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19389/events
|
https://github.com/huggingface/transformers/pull/19389
| 1,400,082,931
|
PR_kwDOCUB6oc5AVmPs
| 19,389
|
Fix gather for metrics in summarization example
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the failing test on the summarization no_trainer script for now; eventually this API will support it directly, but it's not there yet :)
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
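For context on why gathering metrics needed fixing: when a dataset does not divide evenly across processes, data-parallel samplers pad the last batch by repeating examples, so naively gathering predictions duplicates samples and skews metrics. A toy single-process simulation of the remedy (a simplification, not Accelerate's actual implementation):

```python
def gather_for_metrics(per_process_preds, dataset_len):
    """Concatenate predictions from all processes, dropping duplicated padding.

    Samplers typically pad each process to the same number of samples by
    repeating examples; truncating to the true dataset length removes them.
    """
    gathered = [p for preds in per_process_preds for p in preds]
    return gathered[:dataset_len]

# 10 samples across 4 processes -> each process padded to 3 samples (12 total).
per_process = [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]]  # 0 and 1 repeated as padding
print(gather_for_metrics(per_process, dataset_len=10))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

This truncation is only valid when the sampler pads at the end of the gathered order, which is why the general API took longer to land.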
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19389/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19389",
"html_url": "https://github.com/huggingface/transformers/pull/19389",
"diff_url": "https://github.com/huggingface/transformers/pull/19389.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19389.patch",
"merged_at": 1665146165000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19388
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19388/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19388/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19388/events
|
https://github.com/huggingface/transformers/pull/19388
| 1,400,061,224
|
PR_kwDOCUB6oc5AVhga
| 19,388
|
removed dependency on bart tokenizer(slow/fast) LED
|
{
"login": "Infrared1029",
"id": 60873139,
"node_id": "MDQ6VXNlcjYwODczMTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/60873139?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Infrared1029",
"html_url": "https://github.com/Infrared1029",
"followers_url": "https://api.github.com/users/Infrared1029/followers",
"following_url": "https://api.github.com/users/Infrared1029/following{/other_user}",
"gists_url": "https://api.github.com/users/Infrared1029/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Infrared1029/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Infrared1029/subscriptions",
"organizations_url": "https://api.github.com/users/Infrared1029/orgs",
"repos_url": "https://api.github.com/users/Infrared1029/repos",
"events_url": "https://api.github.com/users/Infrared1029/events{/privacy}",
"received_events_url": "https://api.github.com/users/Infrared1029/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot for the feedback! ill fix everything and open a brand new one. "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
removes the dependency of the LED tokenizer on Bart's (slow version)
Fixes # (issue)
follows Hugging Face's philosophy of Do Repeat Yourself
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19388/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19388",
"html_url": "https://github.com/huggingface/transformers/pull/19388",
"diff_url": "https://github.com/huggingface/transformers/pull/19388.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19388.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19387
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19387/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19387/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19387/events
|
https://github.com/huggingface/transformers/issues/19387
| 1,400,059,634
|
I_kwDOCUB6oc5Tczby
| 19,387
|
Documentation of Adafactor is at odds with Google implementations
|
{
"login": "martiansideofthemoon",
"id": 8805240,
"node_id": "MDQ6VXNlcjg4MDUyNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8805240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/martiansideofthemoon",
"html_url": "https://github.com/martiansideofthemoon",
"followers_url": "https://api.github.com/users/martiansideofthemoon/followers",
"following_url": "https://api.github.com/users/martiansideofthemoon/following{/other_user}",
"gists_url": "https://api.github.com/users/martiansideofthemoon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/martiansideofthemoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/martiansideofthemoon/subscriptions",
"organizations_url": "https://api.github.com/users/martiansideofthemoon/orgs",
"repos_url": "https://api.github.com/users/martiansideofthemoon/repos",
"events_url": "https://api.github.com/users/martiansideofthemoon/events{/privacy}",
"received_events_url": "https://api.github.com/users/martiansideofthemoon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Sorry I'll skip this, I'm not very well versed in that area.\r\n\r\nWhat led you to ping me if it's not too much to ask ? (Since maybe there are better people to ping here).",
"hi @Narsil thanks and no worries! I didn't find a section on optimizers in the issue builder, so I pinged the people in the closest two areas (trainer and pipeline). I am guessing \"trainer\" / @sgugger may be better able to answer the issue.",
"Transformers is not a library of optimizers, so you should really use an implementation of `Adafactor` form somewhere else that suites your need. It will be deprecated and removed in future versions :-) (Note that it comes from fairseq originally, so that's probably the reason you have comments at odds with T5x)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
NONE
| null |
### System Info
- `transformers` version: 4.22.0
- Platform: Linux-5.4.0-126-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, two RTX8000
- Using distributed or parallel set-up in script?: Yes, DDP via HuggingFace accelerate
### Who can help?
documentation of Adafactor: @sgugger @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The documentation of Adafactor seems to be at odds with the Google implementation in T5X / PaLM. I've found these hyperparameters to be critical while optimizing HuggingFace transformers for metric learning tasks. Specifically the documentation ([link](https://huggingface.co/docs/transformers/main_classes/optimizer_schedules#transformers.Adafactor)) says `Use scale_parameter=False` and `Additional optimizer operations like gradient clipping should not be used alongside Adafactor`.
However, in T5X the default hyperparameter is set to `True` and is not modified in the config files (https://github.com/google-research/t5x/blob/83046e22750635f76c7e600f01c0a002915b52b8/t5x/adafactor.py#L199).
Similarly, PaLM used `scale_parameter` with a constant learning rate,
```
Optimizer - ... This is effectively equivalent to Adam (Kingma & Ba, 2014) with "parameter scaling,"
which scales the learning rate by the root-mean-square of the parameter matrix. Because the weight
initialization is proportional to 1/sqrt(n), the effect of this is similar to the manual scaling down of Adam
learning rate as in Brown et al. (2020). However, parameter scaling has the benefit that parameter
matrices which operate at different scales (the embeddings and layer norm scales) do not have their
learning rate scaled down at the same rate.... We use an Adafactor learning rate of 10^-2 for the first 10,000
steps, which is then decayed at a rate of 1/sqrt(k), where k is the step number. We train with momentum
of beta_1 = 0.9 .... We use global norm gradient clipping (Pascanu et al. (2012)) with a value of 1.0 for all
models...
```
Overall, consistent with the Google recommendations, the following hyperparameters worked well for me:
```
optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=False, warmup_init=False, lr=float(args.learning_rate))
...
accelerator.clip_grad_norm_(model.parameters(), 1.0)
```
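The "parameter scaling" rule quoted above can be sketched in isolation. This is a simplified, stdlib-only illustration of the step-size rule described in the Adafactor paper (base learning rate multiplied by the RMS of the parameter matrix, floored at a small epsilon), not the actual `transformers` or T5X implementation; the function name and the `eps2` default are illustrative:

```python
import math

def scaled_lr(base_lr, param, eps2=1e-3):
    """Effective Adafactor step size under parameter scaling.

    With scale_parameter=True the base learning rate is multiplied by
    max(eps2, RMS(param)), so matrices that operate at a larger scale
    take proportionally larger steps, while near-zero matrices are
    floored at eps2 * base_lr.
    """
    rms = math.sqrt(sum(x * x for x in param) / len(param))
    return base_lr * max(eps2, rms)

# A parameter matrix with RMS 2.0 doubles the effective step size:
print(scaled_lr(0.01, [2.0, -2.0]))  # 0.02
```

This makes concrete why layer-norm scales (values near 1.0) keep roughly the base learning rate while the rule still adapts other matrices, which is the benefit the PaLM excerpt describes.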
### Expected behavior
N/A (documentation fix)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19387/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19386
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19386/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19386/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19386/events
|
https://github.com/huggingface/transformers/issues/19386
| 1,400,044,459
|
I_kwDOCUB6oc5Tcvur
| 19,386
|
Different Embeddings values on different OS
|
{
"login": "samarthsarin",
"id": 40137295,
"node_id": "MDQ6VXNlcjQwMTM3Mjk1",
"avatar_url": "https://avatars.githubusercontent.com/u/40137295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samarthsarin",
"html_url": "https://github.com/samarthsarin",
"followers_url": "https://api.github.com/users/samarthsarin/followers",
"following_url": "https://api.github.com/users/samarthsarin/following{/other_user}",
"gists_url": "https://api.github.com/users/samarthsarin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samarthsarin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samarthsarin/subscriptions",
"organizations_url": "https://api.github.com/users/samarthsarin/orgs",
"repos_url": "https://api.github.com/users/samarthsarin/repos",
"events_url": "https://api.github.com/users/samarthsarin/events{/privacy}",
"received_events_url": "https://api.github.com/users/samarthsarin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
NONE
| null |
### System Info
Sentence Transformers Version - 2.2.0
Platform - Windows 10
Python Version - 3.8.5
I am trying to create word embeddings for a couple of words, but the same embeddings are not generated on machines with different operating systems (I have checked Windows and Linux). I am clustering text embeddings, and because the embeddings for a given word differ, the resulting clusters differ as well. All development happens on my Windows machine and the final deployment is on an AWS EC2 (Linux) instance, and the embeddings are not the same on the two machines. Can you please help me solve this issue? I have tried setting all the seed values, but the embeddings are still not identical: there is a small variation that starts at the 4th or 5th decimal position in the embeddings array.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from sentence_transformers import SentenceTransformer, util
import random
import numpy as np
import torch
import os
random.seed(42)
np.random.seed(42)
torch.manual_seed(42)
torch.cuda.manual_seed_all(42)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
os.environ['PYTHONHASHSEED'] = str(42)
torch.use_deterministic_algorithms(True)
model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-v2')
sentences = ["customer experience"]
embeddings = model.encode(sentences)
model.encode("customer experience")
### Expected behavior
The embedding created doesn't have exactly same values across different machines. Specially for different OS Machines. The value starts to differ at 4th or 5th positions of every element. Here is the embeddings array of my machine for the word "customer experience"
array([-5.63166857e-01, -4.05957282e-01, 3.00267637e-01, -2.46767655e-01,
4.89773035e-01, -1.94317810e-02, 1.80651009e-01, 8.92449439e-01,
-1.74235195e-01, 3.29178236e-02, -1.19764984e-01, 2.58512050e-01,
1.51172578e+00, -5.46738386e-01, -1.15303159e-01, -5.24251983e-02,
-2.24761695e-01, 3.28272223e-01, 4.98460889e-01, -8.20172966e-01,
-1.17172766e+00, -9.98448491e-01, 3.21752965e-01, 3.72964174e-01,
1.82584435e-01, -4.08045053e-01, -2.02570185e-01, -4.01083052e-01,
-1.54582113e-01, 6.08542264e-02, 3.55301678e-01, 1.58671722e-01,
-4.71475840e-01, -4.93791938e-01, 1.62821263e-04, 3.33021164e-01,
2.97434449e-01, 3.72983813e-01, -5.82175553e-01, -8.59432593e-02,
-1.84757441e-01, -5.53481221e-01, 6.05549157e-01, -1.52354419e-01,
-8.89008582e-01, -1.22463606e-01, -6.02528095e-01, -1.82574391e-01,
3.01688969e-01, 6.89519763e-01, 2.30612442e-01, 6.26742125e-01,
8.43013823e-02, -3.03132862e-01, -1.85130581e-01, 5.28024077e-01,
6.71206862e-02, -9.32246521e-02, -4.03505266e-02, -4.49038267e-01,
5.06386906e-03, 7.86191404e-01, -8.70651156e-02, -7.72568226e-01,
-1.32925123e-01, -1.24123693e-01, 3.29535365e-01, -5.11285424e-01,
-5.65095618e-02, 9.33079541e-01, 3.53344619e-01, -5.66991568e-01,
2.39370614e-01, 6.86836958e-01, -1.44293070e+00, -2.73904860e-01,
-1.90752760e-01, 5.77968955e-01, -3.78967732e-01, -2.59176493e-01,
2.76730835e-01, -5.14467835e-01, 1.06894684e+00, -4.06756431e-01,
-6.92828238e-01, -2.19953716e-01, 4.77855325e-01, -5.88070691e-01,
5.13936020e-02, 2.48879939e-01, -4.67677772e-01, -2.15098113e-01,
-1.09672315e-01, 1.01601869e-01, -2.71980494e-01, 4.15393680e-01,
2.42622405e-01, 1.73546404e-01, -1.73137829e-01, 9.69614685e-02,
4.23627317e-01, -9.35343504e-02, 8.40337425e-02, -3.80988598e-01,
2.08486021e-01, 5.14860749e-01, 3.26781601e-01, -5.36286473e-01,
-3.18198889e-01, 8.19383442e-01, -6.75107002e-01, -1.86185926e-01,
3.88082922e-01, 8.55610073e-01, -7.86133289e-01, 3.95356789e-02,
-2.44248822e-01, 1.14838436e-01, 6.87963545e-01, -9.37253654e-01,
1.19670846e-01, 2.22856849e-02, -1.01163872e-02, 2.25836709e-01,
8.92986879e-02, -6.63402498e-01, 5.70526302e-01, 6.88406408e-01,
-8.66231248e-02, -4.10765529e-01, 5.30590117e-01, -7.02219427e-01,
-3.93625051e-01, 6.24131560e-01, 1.48762420e-01, 8.14396262e-01,
4.03758168e-01, -4.09283876e-01, -1.13471504e-02, 1.74081907e-01,
2.16557682e-01, -8.00780594e-01, 3.03449005e-01, -2.27454484e-01,
-1.42966017e-01, -5.93980193e-01, 6.39644504e-01, -4.82465982e-01,
-5.32015800e-01, -9.92556393e-01, 6.19081676e-01, 1.07305683e-01,
-1.31213859e-01, -1.93007499e-01, 1.17079806e+00, 2.76987970e-01,
-9.27469432e-01, 4.39499795e-01, -4.15544622e-02, 7.88270384e-02,
3.29236805e-01, 3.67188096e-01, -1.04401684e+00, 3.53199422e-01,
2.66258687e-01, 7.28520513e-01, -1.70863360e-01, -3.29261243e-01,
-1.86119117e-02, -3.16396415e-01, 1.98385924e-01, -3.98931444e-01,
-2.50344127e-01, 7.89347351e-01, 2.74530977e-01, 3.58546704e-01,
-3.60908270e-01, 4.97751117e-01, -2.81880677e-01, 1.68201163e-01,
-1.12762606e+00, 7.02131689e-01, 1.80761516e-01, -9.53825295e-01,
3.74447078e-01, -3.55577737e-01, 8.39326233e-02, -7.67105103e-01,
-8.43731999e-01, -1.86966315e-01, 5.03540993e-01, -6.08295083e-01,
-3.00569564e-01, -1.36414242e+00, -4.82496992e-02, -9.76607054e-02,
-6.12891853e-01, 1.57747135e-01, -2.03161985e-01, 2.40768135e-01,
6.33511603e-01, 2.32761055e-01, -1.51648432e-01, -3.39404374e-01,
2.62024403e-01, -4.33223426e-01, 1.16399661e-01, -7.55017877e-01,
2.25884423e-01, -3.73176008e-01, -3.69134128e-01, -3.18936348e-01,
-1.70973599e-01, 7.32566595e-01, 4.68904078e-01, 7.00135976e-02,
-3.62482786e-01, -2.02929229e-01, -7.19937533e-02, 2.56802320e-01,
3.79254043e-01, 6.80404246e-01, 4.17938679e-01, 3.91916335e-01,
-4.78704631e-01, 6.18772432e-02, 3.69294941e-01, 2.43110564e-02,
-2.21559495e-01, -6.37414038e-01, 4.22997415e-01, 2.84579862e-02,
1.39831871e-01, -7.43579507e-01, 2.52516031e-01, 1.08011149e-01,
3.73635620e-01, 1.69237405e-01, -1.94794923e-01, 4.08671081e-01,
-5.18766701e-01, 3.21041405e-01, -3.61130059e-01, 9.24525499e-01,
2.80599803e-01, -5.23387730e-01, -9.23230588e-01, 2.09240839e-01,
5.50950229e-01, -5.63352942e-01, -4.63511765e-01, 2.38961935e-01,
3.58597219e-01, 4.27797139e-01, -1.00327037e-01, -1.08362997e+00,
1.55897349e-01, 5.38530573e-02, 1.59043074e-03, 2.29418337e-01,
-5.35291284e-02, -1.12637460e-01, 2.65441805e-01, 4.49611723e-01,
3.90090346e-01, -1.42261416e-01, -7.70705462e-01, 1.08629473e-01,
5.40238500e-01, 1.08955741e+00, -5.29613614e-01, -5.03211975e-01,
3.90169293e-01, 9.20682132e-01, 6.66368484e-01, -3.91029358e-01,
-3.09388995e-01, 2.70938456e-01, 6.76514268e-01, -3.87805164e-01,
-1.60892338e-01, -3.64872932e-01, 3.67217273e-01, -7.62496114e-01,
7.96184301e-01, -4.87817109e-01, -9.04241800e-01, 5.17966866e-01,
-1.11159825e+00, 8.57870877e-02, 8.98796916e-02, 3.31583843e-02,
2.30660737e-01, -3.57683510e-01, 1.25084507e+00, -6.78460658e-01,
7.95050085e-01, 9.12836134e-01, -3.08217525e-01, -4.36114669e-02,
3.08174826e-02, -3.00375223e-01, 4.11211967e-01, 1.09019957e-01,
7.06879079e-01, -6.82136357e-01, 5.54503620e-01, -1.12970269e+00,
-8.21152806e-01, -1.34905732e+00, 3.00113320e-01, -5.02252460e-01,
-2.98326731e-01, -6.62151694e-01, 1.02041280e+00, 1.64372265e-01,
1.27767578e-01, 1.05911744e+00, 4.48069215e-01, 3.38572681e-01,
-1.08860053e-01, -4.10779119e-01, -2.82041848e-01, 1.19134068e+00,
1.02312341e-02, -4.56356674e-01, 1.92146748e-01, 3.40512484e-01,
-4.04280692e-01, -6.11404777e-01, 5.63679859e-02, 4.72349763e-01,
4.93698537e-01, -4.36762571e-01, -1.56004876e-02, 2.46875226e-01,
-1.43379673e-01, 3.10023427e-02, 3.13399971e-01, 2.04513907e-01,
-8.23624253e-01, 1.72084451e-01, 4.49703097e-01, -9.49652433e-01,
1.19886644e-01, -4.77594048e-01, 5.51294923e-01, -7.20850348e-01,
-4.27250206e-01, -4.53100443e-01, 8.13941360e-01, 4.01361167e-01,
6.83571458e-01, -3.42129886e-01, -7.66427994e-01, -3.53065670e-01,
5.49451828e-01, 1.82685345e-01, -1.86077744e-01, -1.42353363e-02,
2.29258999e-01, -3.30613971e-01, 3.69689107e-01, -6.29568338e-01,
7.90782347e-02, 3.44798952e-01, 5.59364378e-01, 7.05829799e-01,
-9.61028263e-02, -1.39723748e-01, -2.31106445e-01, 2.35272795e-01,
-6.72725201e-01, -1.37946084e-02, -1.04533529e+00, -5.14720857e-01,
-6.02638245e-01, 1.42247796e-01, 1.38257787e-01, -3.10868174e-02,
1.48533672e-01, -2.18283951e-01, -4.00203288e-01, -5.81396222e-01,
1.10336840e+00, 1.29402208e+00, 1.06964624e+00, -3.32895130e-01,
2.55944878e-01, 6.79058790e-01, 3.22150648e-01, -1.64049804e-01,
-9.84220207e-02, 6.52461171e-01, 1.86710641e-01, 2.99713403e-01,
-5.97481191e-01, -9.41333696e-02, -9.03365016e-02, 9.17031825e-01,
6.96043000e-02, 3.91068816e-01, 9.05843750e-02, -1.76928818e-01,
8.88674974e-01, 6.19346559e-01, -5.14562845e-01, -4.47102636e-01,
2.60381103e-01, -1.22727379e-01, -6.05612040e-01, 2.77419269e-01,
-2.34546542e-01, -9.54378620e-02, 6.49136305e-03, -4.91520852e-01,
8.34568143e-01, -2.58982517e-02, -2.86573529e-01, -6.15404367e-01,
-1.51788199e+00, 3.47156405e-01, -8.39735866e-01, 3.24092031e-01,
6.57103062e-01, 6.23090267e-01, 2.63404757e-01, -4.45135310e-02,
9.08290148e-01, 1.18319124e-01, 8.70594263e-01, 6.80169523e-01,
-4.84604776e-01, -7.03717947e-01, -1.89168632e-01, 1.16403615e+00,
-3.50110173e-01, -4.15479571e-01, -9.21172857e-01, -2.33189672e-01,
6.42113864e-01, 8.00730109e-01, 3.99987459e-01, 3.83187056e-01,
4.83411551e-01, -4.20992970e-02, 5.06903112e-01, 7.40851760e-01,
9.11108702e-02, 6.55519247e-01, 7.62610734e-01, 1.12601042e-01,
-4.01560158e-01, -2.08203390e-01, -4.87336189e-01, 5.74378014e-01,
5.99273086e-01, -5.23595288e-02, -7.59932876e-01, -3.45638156e-01,
6.99717045e-01, -1.51044503e-01, 5.20237088e-01, -3.08910757e-03,
1.49888724e-01, -2.29050353e-01, -4.98495191e-01, 2.51217410e-02,
4.10942405e-01, -1.57569438e-01, 2.43655652e-01, 1.33666843e-02,
3.19108926e-02, 2.01601386e-01, 1.30144671e-01, 2.91789353e-01,
-1.87403232e-01, -1.12883002e-01, 5.42151570e-01, -2.47579753e-01,
5.09843528e-01, -4.74907577e-01, 1.22318432e-01, -8.71497840e-02,
1.10734373e-01, 2.24654555e-01, 7.06339240e-01, -1.18613824e-01,
1.79778591e-01, 6.78329289e-01, -2.88403273e-01, -3.57292056e-01,
9.37119365e-01, 1.15470958e+00, 1.79152638e-01, 1.75601542e-01,
2.84290433e-01, -3.61450374e-01, 2.07007974e-01, 2.91608930e-01,
-6.35592461e-01, -8.93313050e-01, 1.05036795e-01, 8.57329369e-03,
6.08366072e-01, -5.03044486e-01, 3.17721739e-02, -4.24353957e-01,
3.90238464e-01, -3.29834163e-01, -6.89130187e-01, -4.17219624e-02,
-9.35876787e-01, 2.66513348e-01, 3.34133267e-01, -3.65045339e-01,
-6.92205131e-01, -5.72713852e-01, -4.77733314e-01, -5.86308017e-02,
1.98600173e-01, -1.85073182e-01, -5.17492890e-01, 3.38486731e-01,
-4.74322766e-01, 8.16874862e-01, -7.71266043e-01, 8.25465083e-01,
-2.50290662e-01, 7.52730444e-02, -6.25011086e-01, -8.58676061e-02,
-4.33004260e-01, 4.56393622e-02, -2.78941654e-02, -2.53382444e-01,
-8.48090887e-01, -5.19386292e-01, -6.39506280e-01, -5.87998986e-01,
-3.09086069e-02, -4.45444703e-01, 7.53717065e-01, 1.12176526e+00,
-1.47348925e-01, 5.91460109e-01, 1.49989009e-01, 5.84628761e-01,
-7.06241906e-01, 4.73896340e-02, -4.02556092e-01, -3.51079516e-02,
5.82646608e-01, 4.22980964e-01, -1.13974705e-01, 5.19442677e-01,
-4.21998501e-01, 4.76445556e-02, -6.82383329e-02, 9.83098507e-01,
5.77297986e-01, 6.72681808e-01, 4.63875353e-01, -4.40883100e-01,
3.28395277e-01, -4.51458216e-01, -1.08331466e+00, -2.27949128e-01,
-3.48160297e-01, -6.54514432e-01, -1.06261909e+00, 3.78970280e-02,
3.76855463e-01, 1.23420453e+00, -1.54484093e-01, -2.39598811e-01,
-6.96872354e-01, 1.58317983e-01, 3.26650649e-01, 6.56132340e-01,
9.27726999e-02, 1.17278016e+00, 2.04693019e-01, 9.35090780e-02,
-4.41390455e-01, -3.65751505e-01, 1.49403632e-01, -1.13220736e-01,
-1.06763467e-01, -6.80416882e-01, -5.72383285e-01, -1.00686356e-01,
8.13092351e-01, 3.27822149e-01, -6.00021541e-01, 3.44711006e-01,
8.28786194e-02, 1.25907615e-01, 4.17931914e-01, -8.35630968e-02,
5.91417730e-01, 2.51130730e-01, 4.58533823e-01, -1.83726788e-01,
4.93454754e-01, -4.29039717e-01, -6.57490715e-02, 2.03398407e-01,
-4.31751430e-01, 5.68911254e-01, 2.54821964e-02, -4.16832864e-01,
-2.70133823e-01, 5.73930085e-01, -6.77836776e-01, -5.92604160e-01,
-1.24327138e-01, -1.29152715e+00, -3.77081074e-02, -5.18579423e-01,
-2.62488842e-01, -3.72892916e-01, -3.80493939e-01, 7.40116090e-02,
-5.15156910e-02, -7.21140265e-01, -1.39724612e-01, 7.07901493e-02,
-1.12637803e-01, -1.60605922e-01, 1.51501581e-01, 3.13334197e-01,
1.21444154e+00, -2.14568496e-01, -5.66242695e-01, -2.38805786e-01,
-2.13572249e-01, -1.32878691e-01, 2.12020248e-01, 5.40322185e-01,
1.93933874e-01, 4.43719685e-01, 1.48676664e-01, 3.87566030e-01,
-8.89887452e-01, 8.66037533e-02, -2.93432958e-02, -5.26472628e-02,
8.01454112e-02, 3.83317508e-02, 1.04065776e+00, 4.99512762e-01,
-4.15351212e-01, 1.12056828e+00, 4.18051839e-01, 8.18798468e-02,
-1.22060739e-02, -4.64514703e-01, -6.00997984e-01, -1.78236380e-01,
1.37272656e-01, -1.48927256e-01, -3.94253761e-01, -6.18627429e-01,
-8.96688998e-01, 5.76650023e-01, 6.77368343e-02, 4.78950560e-01,
-8.79291445e-03, 1.49765313e-01, 1.85265213e-01, -7.22151637e-01,
4.68619287e-01, 2.87488699e-01, -4.97989774e-01, 3.60051811e-01,
-7.40101188e-02, 7.89022982e-01, -9.01167750e-01, 4.41429734e-01,
-1.04249132e+00, -9.19685781e-01, 1.15506038e-01, -7.13049531e-01,
-6.65355742e-01, -5.30628860e-01, -3.26595902e-01, 2.66646266e-01,
-1.25525951e-01, 5.60440779e-01, 5.07836461e-01, 3.95468861e-01,
9.60432529e-01, 4.94689703e-01, -3.03658307e-01, -1.77312210e-01,
-4.58492279e-01, -7.47409761e-01, 4.59275484e-01, -6.79710865e-01,
3.75889093e-02, 7.20455572e-02, 1.30812436e-01, -4.99181062e-01,
2.22169235e-01, -4.90931898e-01, 4.04202938e-01, -8.05476069e-01,
-6.52545542e-02, -8.90152752e-01, 7.38128006e-01, -7.10134208e-02,
4.61333185e-01, 6.00929521e-02, -8.14593077e-01, 2.95668125e-01,
-3.19611222e-01, -6.16702795e-01, -3.27287138e-01, 5.32396674e-01,
-3.02708775e-01, -3.89988780e-01, 8.80602375e-03, -6.62351489e-01,
4.32329148e-01, 4.50594246e-01, 4.41902071e-01, 4.36784565e-01,
-2.12716207e-01, 6.03905916e-01, 9.52148795e-01, -5.97970843e-01,
8.71068358e-01, -5.62861085e-01, -9.95771408e-01, 4.22280073e-01,
4.24299121e-01, -1.84334852e-02, 5.01072884e-01, -6.66608214e-01,
8.03120807e-02, 2.01032907e-01, 7.90493011e-01, -2.10665435e-01,
3.26374441e-01, -9.52832401e-02, 6.92926943e-01, 5.12748480e-01,
-8.07392776e-01, -5.92466474e-01, 6.91362977e-01, 6.96171284e-01,
-4.52700555e-01, -1.18983597e-01, -7.88870752e-02, -4.05955195e-01,
-1.73313439e-01, 5.43577015e-01, -5.59811592e-01, -6.02401972e-01,
1.25281483e-01, -7.22728595e-02, -9.14074957e-01, 1.59500167e-01,
3.40227425e-01, 1.24806687e-01, -4.74854290e-01, -4.31868196e-01],
dtype=float32)
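Since the divergence only appears around the 4th-5th decimal place, one pragmatic workaround is to compare (or deduplicate) embeddings with a tolerance rather than exact equality before clustering. A minimal stdlib sketch; the function name and tolerance are illustrative, and the two short vectors are made-up examples of the cross-OS difference described above:

```python
import math

def embeddings_close(a, b, abs_tol=1e-3):
    """True when two embedding vectors agree element-wise within abs_tol.

    Cross-platform differences from non-deterministic BLAS kernels show
    up around the 4th-5th decimal place, so an absolute tolerance of
    1e-3 treats such vectors as equal.
    """
    return len(a) == len(b) and all(
        math.isclose(x, y, abs_tol=abs_tol) for x, y in zip(a, b)
    )

# Vectors differing only in the 5th decimal place compare equal:
v_windows = [-0.56317, -0.40596, 0.30027]
v_linux = [-0.56318, -0.40595, 0.30026]
print(embeddings_close(v_windows, v_linux))  # True
```

Note that `math.isclose` uses `abs_tol` here because the differences are absolute noise, independent of each element's magnitude.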
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19386/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19385
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19385/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19385/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19385/events
|
https://github.com/huggingface/transformers/pull/19385
| 1,400,010,612
|
PR_kwDOCUB6oc5AVWim
| 19,385
|
update attention mask handling
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Using the `pipeline` wrapper also works : `pipe = pipeline(\"automatic-speech-recognition\", model=\"openai/whisper-medium.en\", device=0)`. "
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
Fixes an error when using Whisper with the Inference API.
Working script:
```python
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor, AutomaticSpeechRecognitionPipeline
>>> from datasets import load_dataset
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-large")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large")
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(task="transcribe", language = "en")
>>> model.config.max_length = 224
>>> pipeline = AutomaticSpeechRecognitionPipeline(
model = model,
tokenizer = processor.tokenizer,
feature_extractor = processor.feature_extractor)
>>> print(pipeline(ds[0]["audio"]["array"]))
{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.'}
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19385/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19385",
"html_url": "https://github.com/huggingface/transformers/pull/19385",
"diff_url": "https://github.com/huggingface/transformers/pull/19385.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19385.patch",
"merged_at": 1665154448000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19384
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19384/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19384/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19384/events
|
https://github.com/huggingface/transformers/issues/19384
| 1,399,913,006
|
I_kwDOCUB6oc5TcPou
| 19,384
|
Download pretrained models from a new conda virtualenv with higher python version and higher transformers version
|
{
"login": "sujoung",
"id": 31689453,
"node_id": "MDQ6VXNlcjMxNjg5NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/31689453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sujoung",
"html_url": "https://github.com/sujoung",
"followers_url": "https://api.github.com/users/sujoung/followers",
"following_url": "https://api.github.com/users/sujoung/following{/other_user}",
"gists_url": "https://api.github.com/users/sujoung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sujoung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sujoung/subscriptions",
"organizations_url": "https://api.github.com/users/sujoung/orgs",
"repos_url": "https://api.github.com/users/sujoung/repos",
"events_url": "https://api.github.com/users/sujoung/events{/privacy}",
"received_events_url": "https://api.github.com/users/sujoung/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thank you for the report @sujoung, looking into it.",
"Indeed, this will be fixed once https://github.com/huggingface/transformers/pull/19244 is in a release (the release will likely be done Monday or Tuesday)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
NONE
| null |
### System Info
Old
python==3.7
transformers==3.5.0
New
python==3.9
transformers==4.22.2
### Who can help?
@LysandreJik, @NielsRogge
Hello! Nothing really critical, I think, but I stumbled upon this message so I am sharing it here. I think this comes from some legacy cache-handling code path; everything works fine, and it is just a warning message that I wanted to share.
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Download distilbert-base-uncased on an environment with python=3.7 & transformers==3.5
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
entk = AutoTokenizer.from_pretrained("distilbert-base-uncased")
enlm = AutoModelWithLMHead.from_pretrained("distilbert-base-uncased")
```
2. Create another virtualenv with python=3.9
3. Install transformers==4.22.2
4. Download distilbert-base-uncased using the same snippet above
### Expected behavior
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
Moving 114 files to the new cache system
0%| | 0/114 [00:00<?, ?it/s]
There was a problem when trying to move your cache:
File "/Users/sujoungbaeck/opt/anaconda3/envs/hubert-api-package__3.9/lib/python3.9/site-packages/transformers/utils/hub.py", line 1128, in <module>
move_cache()
File "/Users/sujoungbaeck/opt/anaconda3/envs/hubert-api-package__3.9/lib/python3.9/site-packages/transformers/utils/hub.py", line 1071, in move_cache
hub_metadata[url] = get_hub_metadata(url, token=token)
File "/Users/sujoungbaeck/opt/anaconda3/envs/hubert-api-package__3.9/lib/python3.9/site-packages/transformers/utils/hub.py", line 996, in get_hub_metadata
huggingface_hub.file_download._raise_for_status(r)
AttributeError: module 'huggingface_hub.file_download' has no attribute '_raise_for_status'
Please file an issue at https://github.com/huggingface/transformers/issues/new/choose and copy paste this whole message and we will do our best to help.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19384/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19383
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19383/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19383/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19383/events
|
https://github.com/huggingface/transformers/issues/19383
| 1,399,875,233
|
I_kwDOCUB6oc5TcGah
| 19,383
|
Run text-classification example with AdaHessian optimizer
|
{
"login": "iTsingalis",
"id": 16863276,
"node_id": "MDQ6VXNlcjE2ODYzMjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/16863276?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iTsingalis",
"html_url": "https://github.com/iTsingalis",
"followers_url": "https://api.github.com/users/iTsingalis/followers",
"following_url": "https://api.github.com/users/iTsingalis/following{/other_user}",
"gists_url": "https://api.github.com/users/iTsingalis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iTsingalis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iTsingalis/subscriptions",
"organizations_url": "https://api.github.com/users/iTsingalis/orgs",
"repos_url": "https://api.github.com/users/iTsingalis/repos",
"events_url": "https://api.github.com/users/iTsingalis/events{/privacy}",
"received_events_url": "https://api.github.com/users/iTsingalis/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!\r\n\r\ncc @sgugger ",
"> \r\n\r\nSorry for my misplaced post. I think the problem is solved. To be honest I just re-entered the modifications on the original code more carefully and now it seems to be working. You can delete my post or move it to the forum if you find it more appropriate. Sorry for the inconvenience again. "
] | 1,665
| 1,665
| 1,665
|
NONE
| null |
### System Info
torch 1.12.1+cu113
transformers 4.23.0.dev0
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, I want to use AdaHessian optimizer in [text-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) example run_glue_no_trainer.py. To do so I have modified the part of the code where the optimizer is selected. That is, instead of this
```python
# Optimizer
# Split weights in two groups, one with weight decay and the other not.
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
        "weight_decay": args.weight_decay,
    },
    {
        "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]
optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=args.learning_rate)
```
and this,
```python
for epoch in range(starting_epoch, args.num_train_epochs):
    model.train()
    if args.with_tracking:
        total_loss = 0
    for step, batch in enumerate(train_dataloader):
        # We need to skip steps until we reach the resumed step
        if args.resume_from_checkpoint and epoch == starting_epoch:
            if resume_step is not None and step < resume_step:
                completed_steps += 1
                continue
        outputs = model(**batch)
        loss = outputs.loss
        # We keep track of the loss at each epoch
        if args.with_tracking:
            total_loss += loss.detach().float()
        loss = loss / args.gradient_accumulation_steps  # Do we need this? backward does this calculation...
        accelerator.backward(loss)
        if step % args.gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1:
            optimizer.step()
            lr_scheduler.step()
            optimizer.zero_grad()
            progress_bar.update(1)
            completed_steps += 1
```
I am using this
```python
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
        "weight_decay": args.weight_decay,
    },
    {
        "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]
if args.optimizer == 'AdamW':
    optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=args.learning_rate)
elif args.optimizer == 'AdaHessian':
    optimizer = AdaHessian(optimizer_grouped_parameters, lr=args.learning_rate)
```
and this
```python
for epoch in range(starting_epoch, args.num_train_epochs):
    model.train()
    if args.with_tracking:
        total_loss = 0
    for step, batch in enumerate(train_dataloader):
        # We need to skip steps until we reach the resumed step
        if args.resume_from_checkpoint and epoch == starting_epoch:
            if resume_step is not None and step < resume_step:
                completed_steps += 1
                continue

        # batch = Variable(**batch, requires_grad=True)
        def closure(backward=True):
            if backward:
                optimizer.zero_grad()
            outputs = model(**batch)
            loss = outputs.loss
            if backward:
                # loss = Variable(loss, requires_grad=True)  # Didn't help
                # create_graph=True is necessary for Hessian calculation
                accelerator.backward(loss, create_graph=True)
            return loss

        loss = closure(backward=False)
        # We keep track of the loss at each epoch
        if args.with_tracking:
            total_loss += loss.detach().float()
        if step % args.gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1:
            optimizer.step(closure=closure)
            lr_scheduler.step()
            progress_bar.update(1)
            completed_steps += 1
```
respectively. The AdaHessian is given [here](https://github.com/davda54/ada-hessian/blob/master/ada_hessian.py).
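As an aside, the `no_decay` string-matching pattern used in both snippets above can be exercised with plain Python; the parameter names below are illustrative stand-ins for the dotted names a BERT-like model exposes, not taken from an actual checkpoint:

```python
# Illustrative parameter names; real models expose similar dotted paths.
param_names = [
    "encoder.layer.0.attention.self.query.weight",
    "encoder.layer.0.attention.self.query.bias",
    "encoder.layer.0.output.LayerNorm.weight",
    "encoder.layer.0.output.LayerNorm.bias",
]
no_decay = ["bias", "LayerNorm.weight"]

# Same membership test as in the run_glue snippets above.
decay_group = [n for n in param_names if not any(nd in n for nd in no_decay)]
no_decay_group = [n for n in param_names if any(nd in n for nd in no_decay)]

print(decay_group)     # only the attention weight matrix gets weight decay
print(no_decay_group)  # biases and LayerNorm parameters are exempt
```

This shows that only genuine weight matrices receive weight decay, which is the intent of the grouping regardless of which optimizer consumes the groups.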
### Expected behavior
Normally, training should continue, but

```
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```

is raised by

```python
h_zs = torch.autograd.grad(grads, params, grad_outputs=zs, only_inputs=True, retain_graph=i < self.n_samples - 1)
```

in the optimizer's `set_hessian` function:
```python
@torch.no_grad()
def set_hessian(self):
    """
    Computes the Hutchinson approximation of the hessian trace and accumulates it for each trainable parameter.
    """
    params = []
    for p in filter(lambda p: p.grad is not None, self.get_params()):
        if self.state[p]["hessian step"] % self.update_each == 0:  # compute the trace only each `update_each` step
            params.append(p)
        self.state[p]["hessian step"] += 1

    if len(params) == 0:
        return

    if self.generator.device != params[0].device:  # hackish way of casting the generator to the right device
        self.generator = torch.Generator(params[0].device).manual_seed(2147483647)

    grads = [p.grad for p in params]

    for i in range(self.n_samples):
        zs = [torch.randint(0, 2, p.size(), generator=self.generator, device=p.device) * 2.0 - 1.0 for p in params]  # Rademacher distribution {-1.0, 1.0}
        h_zs = torch.autograd.grad(grads, params, grad_outputs=zs, only_inputs=True, retain_graph=i < self.n_samples - 1)
        for h_z, z, p in zip(h_zs, zs, params):
            p.hess += h_z * z / self.n_samples  # approximate the expected values of z*(H@z)
```
The error is raised because the gradients built from the `params` list do not carry a `grad_fn`. I suspect the problem is related to the input of the optimizer (e.g. the loss in the backward function). Following this [post](https://discuss.pytorch.org/t/runtimeerror-element-0-of-variables-does-not-require-grad-and-does-not-have-a-grad-fn/11074/43), I tried for example

```python
loss = Variable(loss, requires_grad=True)
```

before the backward call in the closure, which makes the script start running, but the accuracy stays around 45% and does not improve. Could you please take a look at the problem and suggest a way to overcome it?
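For context, the double `torch.autograd.grad` pattern inside `set_hessian` only works when the first-order gradients were produced with `create_graph=True`. This minimal plain-PyTorch sketch, independent of Accelerate and of the training script, reproduces the Hessian-vector product on a loss whose Hessian is known analytically:

```python
import torch

p = torch.randn(3, requires_grad=True)
loss = (p ** 2).sum()  # gradient is 2*p, Hessian is 2*I

# create_graph=True keeps the graph, so `grads` itself carries a grad_fn;
# without it, the second grad call below raises the RuntimeError seen above.
grads = torch.autograd.grad(loss, p, create_graph=True)

z = torch.ones_like(p)  # stand-in for one Rademacher sample
h_z = torch.autograd.grad(grads, p, grad_outputs=(z,))[0]  # computes H @ z

assert torch.allclose(h_z, 2 * z)  # matches the analytic Hessian 2*I
```

If this sketch works but the training script fails, the graph is being lost somewhere between the loss and the optimizer, not inside `set_hessian` itself.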
**EDIT:**
I just noticed in the trace back that `lr_scheduler` is mentioned before the error in `torch.autograd.grad`.
```
Traceback (most recent call last):
  File "some root/run_glue.py", line 730, in <module>
    main()
  File "some root/run_glue.py", line 621, in main
    optimizer.step(closure=closure)
  File "some root/anaconda3/envs/AdaCubic/lib/python3.7/site-packages/accelerate/optimizer.py", line 140, in step
    self.optimizer.step(closure)
  File "some root/anaconda3/envs/AdaCubic/lib/python3.7/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "some root/anaconda3/envs/AdaCubic/lib/python3.7/site-packages/torch/optim/optimizer.py", line 113, in wrapper
    return func(*args, **kwargs)
  File "some root/anaconda3/envs/AdaCubic/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "some root/AdaHessian.py", line 105, in step
    self.set_hessian()
  File "some root/anaconda3/envs/AdaCubic/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "some root/cubicReg/Code/Optimizers/AdaHessian.py", line 87, in set_hessian
    retain_graph=i < self.n_samples - 1)
  File "some root/anaconda3/envs/AdaCubic/lib/python3.7/site-packages/torch/autograd/__init__.py", line 278, in grad
    allow_unused, accumulate_grad=False)  # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
  0%|          | 0/6315 [00:02<?, ?it/s]
```
I suspected that something involving `grad_fn` happens inside the accelerator. Indeed, commenting out

```python
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
```

makes the optimization procedure start running, which indicates that `grad_fn` is somehow disabled inside the accelerator. Could someone please suggest a way to overcome this problem?
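One way to check that suspicion outside of Accelerate: after a backward pass intended for second-order use, each `.grad` must itself carry a `grad_fn`. The sketch below is plain PyTorch and only demonstrates the symptom described above, not a fix for the `accelerator.prepare` interaction:

```python
import torch

# Backward with create_graph=True keeps the autograd graph on the gradient,
# which is what AdaHessian's set_hessian needs for its second grad call.
p = torch.randn(2, requires_grad=True)
(p ** 2).sum().backward(create_graph=True)
print(p.grad.grad_fn is not None)  # graph preserved on the gradient

# A default backward detaches the gradient, reproducing the failure mode.
q = torch.randn(2, requires_grad=True)
(q ** 2).sum().backward()
print(q.grad.grad_fn is None)  # no graph: a second-order grad would fail
```

Running the equivalent check on the prepared model's parameters right after `accelerator.backward(loss, create_graph=True)` would show whether the wrapper is dropping the graph.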
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19383/timeline
|
completed
| null | null |