url stringlengths 62-66 | repository_url stringclasses 1 value | labels_url stringlengths 76-80 | comments_url stringlengths 71-75 | events_url stringlengths 69-73 | html_url stringlengths 50-56 | id int64 377M-2.15B | node_id stringlengths 18-32 | number int64 1-29.2k | title stringlengths 1-487 | user dict | labels list | state stringclasses 2 values | locked bool 2 classes | assignee dict | assignees list | comments list | created_at int64 1.54k-1.71k | updated_at int64 1.54k-1.71k | closed_at int64 1.54k-1.71k ⌀ | author_association stringclasses 4 values | active_lock_reason stringclasses 2 values | body stringlengths 0-234k ⌀ | reactions dict | timeline_url stringlengths 71-75 | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/21689
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21689/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21689/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21689/events
|
https://github.com/huggingface/transformers/issues/21689
| 1,590,646,292
|
I_kwDOCUB6oc5ez1YU
| 21,689
|
Make schedulers picklable
|
{
"login": "ViktorooReps",
"id": 56936206,
"node_id": "MDQ6VXNlcjU2OTM2MjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/56936206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ViktorooReps",
"html_url": "https://github.com/ViktorooReps",
"followers_url": "https://api.github.com/users/ViktorooReps/followers",
"following_url": "https://api.github.com/users/ViktorooReps/following{/other_user}",
"gists_url": "https://api.github.com/users/ViktorooReps/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ViktorooReps/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ViktorooReps/subscriptions",
"organizations_url": "https://api.github.com/users/ViktorooReps/orgs",
"repos_url": "https://api.github.com/users/ViktorooReps/repos",
"events_url": "https://api.github.com/users/ViktorooReps/events{/privacy}",
"received_events_url": "https://api.github.com/users/ViktorooReps/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for explaining your issue in depth, and happy to review a PR!"
] | 1,676
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### Feature request
Change lambda functions passed to `LambdaLR` in `get_constant_schedule`, `get_constant_schedule_with_warmup`, `get_linear_schedule_with_warmup`, `get_cosine_schedule_with_warmup`, `get_cosine_with_hard_restarts_schedule_with_warmup` and `get_polynomial_decay_schedule_with_warmup` to callable objects.
### Motivation
Python cannot serialize lambda and local functions. PyTorch works around this in the `state_dict` method of `LambdaLR` by not returning any non-picklable functions:
```python
...
for idx, fn in enumerate(self.lr_lambdas):
if not isinstance(fn, types.FunctionType):
state_dict['lr_lambdas'][idx] = fn.__dict__.copy()
return state_dict
```
While this approach is fine when the LR schedule is constant and deterministic, it makes it impossible to change the schedule dynamically mid-training using lambda functions, since any changes will not be saved to checkpoints.
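A minimal demonstration of the underlying limitation (a sketch added for illustration; depending on the Python version it raises `PicklingError` or `AttributeError`):
```python
import pickle

try:
    # A lambda is pickled by qualified name, and '<lambda>' cannot be looked up in its module.
    pickle.dumps(lambda x: x)
except Exception as e:
    print(type(e).__name__, e)

def make_local():
    def fn(x):  # a local function is equally unreachable by qualified name
        return x
    return fn

try:
    pickle.dumps(make_local())
except Exception as e:
    print(type(e).__name__, e)
```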
In my particular case, I wanted to implement a dynamic LR schedule based on evaluation metrics. I've implemented a wrapper around `LambdaLR` that applies a transformation `fn: float -> float` to the existing LR schedule:
```python
from functools import partial
from typing import Callable, Union

from torch.optim.lr_scheduler import LambdaLR


class LambdaWrapper:
    def __init__(self, lr_lambda: Callable[[Union[float, int]], float], wrapper_function: Callable[[float], float]):
        self._wrapper_function = wrapper_function
        self._lr_lambda = lr_lambda

    def __call__(self, x: Union[float, int]):
        return self._wrapper_function(self._lr_lambda(x))


class DynamicScheduler:
    def __init__(self, lr_scheduler: LambdaLR):
        self._scheduler = lr_scheduler

    def __getattr__(self, item):
        # Call the superclass to avoid infinite recursion through __getattr__
        return getattr(super(DynamicScheduler, self).__getattribute__('_scheduler'), item)

    def wrap_schedule(self, fn: Callable[[float], float]):
        """If you want this object to be picklable, pass only picklable callable objects as `fn`!"""
        wrappers_builder = partial(LambdaWrapper, wrapper_function=fn)  # wrap in a callable object to preserve picklability
        self._scheduler.lr_lambdas = list(map(wrappers_builder, self._scheduler.lr_lambdas))
```
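A minimal usage sketch (assuming the classes above plus `DecreaseLRTransformer` from the callback snippet below; the one-parameter optimizer is just a stand-in):
```python
import torch
from torch.optim.lr_scheduler import LambdaLR

def base_schedule(step: int) -> float:
    # A module-level function, so it stays picklable (unlike a lambda).
    return 1.0

optimizer = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=1e-3)
scheduler = DynamicScheduler(LambdaLR(optimizer, base_schedule))

# Each call composes another picklable transformation: f2(f1(base_schedule(step)))
scheduler.wrap_schedule(DecreaseLRTransformer(0.5))
scheduler.wrap_schedule(DecreaseLRTransformer(0.5))
```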
I've taken special care to preserve picklability; however, since `LambdaLR` instances created by the `transformers` library hold lambda and local functions, pickling of `DynamicScheduler` (as well as its state, which is the same as the wrapped `LambdaLR` state) fails.
Reimplementing dynamic scheduling with lambda functions would let the `torch` workaround handle them, but the whole point of dynamic scheduling would be lost, since the complex dynamically constructed lambdas `f_n(f_n-1(...f_1(schedule(x))...))` would fall back to their default state: `schedule(x)`.
Here is the callback I use to track evaluation metrics for anyone interested:
```python
import logging
import math
from typing import Dict

import numpy as np
from transformers import IntervalStrategy, TrainerCallback, TrainerControl, TrainerState, TrainingArguments

logger = logging.getLogger(__name__)


def get_warmup_steps(args: TrainingArguments, state: TrainerState) -> int:
    return (
        args.warmup_steps
        if args.warmup_steps > 0
        else math.ceil(state.max_steps * args.warmup_ratio)
    )


class DecreaseLRTransformer:
    def __init__(self, decrease_ratio: float):
        if decrease_ratio < 0.0 or decrease_ratio > 1.0:
            raise ValueError('Decrease ratio should be within [0.0, 1.0]')
        self._decrease_ratio = decrease_ratio

    def __call__(self, lr: float):
        return self._decrease_ratio * lr


# Developer notice (may change in future versions of transformers):
# all kwargs have the following fields set: model, tokenizer, optimizer, lr_scheduler, train_dataloader, eval_dataloader
class LRDecreaseCallback(TrainerCallback):
    """
    A [`TrainerCallback`] that handles learning rate decrease based on evaluation metrics.
    """

    def __init__(self, decrease_ratio: float, patience: int, *, decrease_on_warmup: bool = False, decrease_threshold: float = 0.0):
        self._transformer = DecreaseLRTransformer(decrease_ratio)
        self._patience = patience
        self._decrease_on_warmup = decrease_on_warmup
        self._decrease_threshold = decrease_threshold
        self._failed_checks = 0

    def _metric_improved(self, new_metric: float, old_metric: float, *, greater_is_better: bool = True) -> bool:
        operator = np.greater if greater_is_better else np.less
        return operator(new_metric, old_metric) and abs(new_metric - old_metric) > self._decrease_threshold

    def check_metric_value(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, metric_value: float):
        # best_metric is set by the code for load_best_model
        no_metric = (state.best_metric is None)
        warmup_steps = get_warmup_steps(args, state)
        skip_warmup = (self._decrease_on_warmup and warmup_steps >= state.global_step)
        if skip_warmup:
            return
        if no_metric or self._metric_improved(metric_value, state.best_metric, greater_is_better=args.greater_is_better):
            self._failed_checks = 0
            control.should_save = True
        else:
            self._failed_checks += 1

    def on_train_begin(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs):
        if args.metric_for_best_model is None:
            raise ValueError(f"{self.__class__.__name__} requires metric_for_best_model to be defined")
        if args.evaluation_strategy == IntervalStrategy.NO:
            raise ValueError(f"{self.__class__.__name__} requires IntervalStrategy of steps or epoch")

    def on_evaluate(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs):
        metrics: Dict[str, float] = kwargs['metrics']
        lr_scheduler = kwargs['lr_scheduler']
        if not isinstance(lr_scheduler, DynamicScheduler):
            logger.warning(f'{self.__class__.__name__} is not compatible with {lr_scheduler.__class__.__name__} scheduler! '
                           f'Wrap your scheduler with {DynamicScheduler.__name__} to change LR dynamically. '
                           f'{self.__class__.__name__} is disabled!')
            return
        metric_to_check = args.metric_for_best_model
        if not metric_to_check.startswith("eval_"):
            metric_to_check = f"eval_{metric_to_check}"
        metric_value = metrics.get(metric_to_check)
        if metric_value is None:
            logger.warning(f"{self.__class__.__name__} requires metric_for_best_model, "
                           f"but did not find {metric_to_check} in evaluation metrics. {self.__class__.__name__} is disabled!")
            return
        self.check_metric_value(args, state, control, metric_value)
        if self._failed_checks >= self._patience:
            lr_scheduler.wrap_schedule(self._transformer)
            self._failed_checks = 0

    def on_log(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs):
        logs: Dict[str, float] = kwargs['logs']
        logs['lr_decrease_patience'] = (self._patience - self._failed_checks) / self._patience
```
### Your contribution
The simplest and cleanest fix would be to make the local functions global.
Instead of:
```python
def get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps, last_epoch=-1):
def lr_lambda(current_step: int):
if current_step < num_warmup_steps:
return float(current_step) / float(max(1, num_warmup_steps))
return max(
0.0, float(num_training_steps - current_step) / float(max(1, num_training_steps - num_warmup_steps))
)
return LambdaLR(optimizer, lr_lambda, last_epoch)
```
Do this:
```python
from functools import partial

def _linear_schedule_with_warmup_step(current_step: int, *, num_warmup_steps: int, num_training_steps: int) -> float:
    if current_step < num_warmup_steps:
        return float(current_step) / float(max(1, num_warmup_steps))
    return max(
        0.0, float(num_training_steps - current_step) / float(max(1, num_training_steps - num_warmup_steps))
    )

def get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps, last_epoch=-1):
    schedule = partial(_linear_schedule_with_warmup_step, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps)
    return LambdaLR(optimizer, schedule, last_epoch)
```
When created from global functions, partial functions are picklable:
```python
>>> from functools import partial
>>> import pickle
>>> def f(x):
...     print(x)
>>> with open('f.pkl', 'wb') as file:
...     pickle.dump(partial(f, x='Dog'), file)
>>> with open('f.pkl', 'rb') as file:
...     unpickled_f = pickle.load(file)
>>> unpickled_f()
Dog
```
The fix is straightforward, and I can create a PR. Nonetheless, it would be my first contribution, so I might need some help along the way.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21689/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21689/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21688
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21688/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21688/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21688/events
|
https://github.com/huggingface/transformers/pull/21688
| 1,590,620,315
|
PR_kwDOCUB6oc5KSCI-
| 21,688
|
[`bnb`] fix `bnb` decoders bug
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Currently on the `main` branch, there is a silent bug with `bnb` and encoder-decoder models, leading to some modules not being converted to `int8` and hurting users that are on the `main` branch - such as `peft`: https://github.com/huggingface/peft/issues/108
With https://github.com/huggingface/transformers/pull/21579 introduced, the check that decides whether to keep a module as `nn.Linear` was slightly changed and [made more robust](https://github.com/huggingface/transformers/blob/7f1cdf18958efef6339040ba91edb32ae7377720/src/transformers/utils/bitsandbytes.py#L126).
Before #21579, the function `get_keys_to_not_convert` used to return `['decoder', 'lm_head', 'wo']`, which is wrong and, combined with the new check mentioned above, led to none of the decoder layers being converted to int8.
This PR fixes the bug in that function and adds a new test to make sure this never happens again.
Fixes: https://github.com/huggingface/peft/issues/108
cc @sgugger @amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21688/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21688",
"html_url": "https://github.com/huggingface/transformers/pull/21688",
"diff_url": "https://github.com/huggingface/transformers/pull/21688.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21688.patch",
"merged_at": 1676895719000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21687
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21687/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21687/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21687/events
|
https://github.com/huggingface/transformers/issues/21687
| 1,590,581,014
|
I_kwDOCUB6oc5ezlcW
| 21,687
|
Initialize OPT 175B model for a long time
|
{
"login": "larry-fuy",
"id": 1881605,
"node_id": "MDQ6VXNlcjE4ODE2MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1881605?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/larry-fuy",
"html_url": "https://github.com/larry-fuy",
"followers_url": "https://api.github.com/users/larry-fuy/followers",
"following_url": "https://api.github.com/users/larry-fuy/following{/other_user}",
"gists_url": "https://api.github.com/users/larry-fuy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/larry-fuy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/larry-fuy/subscriptions",
"organizations_url": "https://api.github.com/users/larry-fuy/orgs",
"repos_url": "https://api.github.com/users/larry-fuy/repos",
"events_url": "https://api.github.com/users/larry-fuy/events{/privacy}",
"received_events_url": "https://api.github.com/users/larry-fuy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The line takes a very long time because there are 176 billions parameters to initialize. The initialization is also performed several times, which is a bug we recently fixed. If you use the latest main you might see it takes a bit less time.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,680
| 1,680
|
NONE
| null |
### System Info
Recently I have been trying to train OPT 175B (facebook/opt-175b) and found that my code needs almost 10 hours to initialize the model weights before training starts. Actually, my code is pretty simple:
```python
from transformers import OPTForCausalLM, PretrainedConfig

...
config = PretrainedConfig.from_json_file('175b.json')
model = OPTForCausalLM(config)
...
```
Is there any issue with my code, or do I have to configure something to speed up the initialization? @ArthurZucker, @younesbelkada
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I already posted the code that causes the problem in the description above.
### Expected behavior
I think the initialization should not take so long.
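For reference, a sketch (assuming the `accelerate` library is available) of how to skip the random weight initialization entirely when the weights will be loaded from a checkpoint anyway; the resulting meta-device model still needs its weights materialized, e.g. via `accelerate.load_checkpoint_and_dispatch`, before use:
```python
from accelerate import init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("facebook/opt-13b")

# Parameters are created on the "meta" device: no memory is allocated and
# no random initialization runs, so model construction is near-instant.
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config)
```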
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21687/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21686
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21686/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21686/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21686/events
|
https://github.com/huggingface/transformers/issues/21686
| 1,590,548,242
|
I_kwDOCUB6oc5ezdcS
| 21,686
|
Loading T5ForConditionalGeneration model by TFT5ForConditionalGeneration using from_pt=True
|
{
"login": "FrozenWolf-Cyber",
"id": 57902078,
"node_id": "MDQ6VXNlcjU3OTAyMDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57902078?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrozenWolf-Cyber",
"html_url": "https://github.com/FrozenWolf-Cyber",
"followers_url": "https://api.github.com/users/FrozenWolf-Cyber/followers",
"following_url": "https://api.github.com/users/FrozenWolf-Cyber/following{/other_user}",
"gists_url": "https://api.github.com/users/FrozenWolf-Cyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FrozenWolf-Cyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrozenWolf-Cyber/subscriptions",
"organizations_url": "https://api.github.com/users/FrozenWolf-Cyber/orgs",
"repos_url": "https://api.github.com/users/FrozenWolf-Cyber/repos",
"events_url": "https://api.github.com/users/FrozenWolf-Cyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/FrozenWolf-Cyber/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey! Thanks for submitting this issue. \r\nIn T5, the encoder and decoder's embedding tokens are shared, and tied. \r\nAs you can see here in the pytorch modeling code : \r\n```python \r\n _keys_to_ignore_on_load_missing = [\r\n r\"encoder.embed_tokens.weight\",\r\n r\"decoder.embed_tokens.weight\",\r\n r\"lm_head.weight\",\r\n ]\r\n```\r\nThis is because the values stored in `shared.weight` are used for these 3 layers. \r\nThe issue is that these layers get filled after initialisation and are then saved. The warning can be safely ignored as the `shared` layer was properly initialized. \r\n\r\n",
"The outputs of the model should be the same however. I can't reproduce your output, and when I run an inference on my side, the generated tokens are the same. Here is a snippet: \r\n```python \r\n>>> from transformers import AutoTokenizer, T5Config, T5ForConditionalGeneration, TFT5ForConditionalGeneration, set_seed\r\n>>> set_seed(0) #set seed for reproducibility \r\n>>> model = T5ForConditionalGeneration.from_pretrained(\"t5-small\")\r\n>>> model.save_pretrained(\"Arthur/T5-pt\")\r\n>>> tf_model = TFT5ForConditionalGeneration.from_pretrained(\"Arthur/T5-pt\", from_pt=True)\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"t5-small\", padding='max_length', truncation=True)\r\n>>> inputs = tokenizer(\"this is a random input\", return_tensors=\"pt\")\r\n>>> model.generate(**inputs)\r\n# tensor([[0, 3, 5, 1]])\r\n```\r\n\r\n```python \r\n>>> inputs = tokenizer(\"this is a random input\", return_tensors=\"tf\")\r\n>>> tf_model.generate(**inputs)\r\n# <tf.Tensor: shape=(1, 4), dtype=int32, numpy=array([[0, 3, 5, 1]], dtype=int32)>\r\n```\r\n\r\nHowever it seems that if you use the a default config like the following: \r\n```python \r\n>>> from transformers import AutoTokenizer, T5Config, T5ForConditionalGeneration, TFT5ForConditionalGeneration, set_seed\r\n>>> set_seed(0) #set seed for reproducibility \r\n>>> distill_config = T5Config(d_model=256, d_kv = 32, d_ff=512, num_heads=4, decoder_start_token_id=0)\r\n>>> model = T5ForConditionalGeneration(config=distill_config)\r\n>>> model.save_pretrained(\"Arthur/T5-pt\")\r\n>>> tf_model = TFT5ForConditionalGeneration.from_pretrained(\"Arthur/T5-pt\", from_pt=True)\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"t5-small\", padding='max_length', truncation=True)\r\n>>> inputs = tokenizer(\"this is a random input\", return_tensors=\"pt\")\r\n>>> model.generate(**inputs)\r\n```\r\nthe outputs do not match. This should not be expected",
"@ArthurZucker Is there any temporary quick fix to this problem? ",
"The quickest fix is the following (tested locally) : `transformers-cli pt-to-tf --model-name \"ArthurZ/T5-pt\"`. This will make sure the conversion and the hidden states match. Will help you debug if there are any issues. In my case, conversion went well and logits match. \r\nUse `transformers-cli pt-to-tf --model-name <path_to_checkpoint_on_hub>`. Your model ( and the tokenizer) need to be updated and you need to be logged in using `huggingface-cli login `.\r\nSee [here](https://huggingface.co/ArthurZ/T5-pt/discussions/1) for an example of the PR that will be automatically created to your repo",
"OKay, the issue stems from the fact that the model is `training`. If you make sure that `model.eval()` is done for both, everything is fixed! The following works. \r\n```python \r\n>>> from transformers import AutoTokenizer, T5Config, T5ForConditionalGeneration, TFT5ForConditionalGeneration, set_seed\r\n>>> set_seed(0) #set seed for reproducibility \r\n>>> distill_config = T5Config(d_model=256, d_kv = 32, d_ff=512, num_heads=4, decoder_start_token_id=0)\r\n>>> model = T5ForConditionalGeneration(config=distill_config).eval()\r\n>>> model.save_pretrained(\"Arthur/T5-pt\")\r\n>>> tf_model = TFT5ForConditionalGeneration.from_pretrained(\"Arthur/T5-pt\", from_pt=True)\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"t5-small\", padding='max_length', truncation=True)\r\n>>> inputs = tokenizer(\"this is a random input\", return_tensors=\"pt\")\r\n>>> model.generate(**inputs)\r\n```\r\n\r\nThis is an expected behaviour, I think I will just update the documentation to make sure it is clearly stated that this can be a discrepancy. Closing this as fixed, unless you still have problems! ๐ ",
"Thank you for the solution @ArthurZucker :)"
] | 1,676
| 1,677
| 1,677
|
NONE
| null |
### System Info
Ran the code in Colab [here](https://colab.research.google.com/drive/1CJlQ5rVf5b3BysAnP6Y43I2LWl5ExcT2?usp=sharing)
- `transformers` version: 4.26.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
I have run this code in a Colab notebook [here](https://colab.research.google.com/drive/1CJlQ5rVf5b3BysAnP6Y43I2LWl5ExcT2?usp=sharing)
### Who can help?
@Rocketknight1 @ArthurZucker @gante
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
**Code:**
```python
from transformers import T5Config, T5ForConditionalGeneration, TFT5ForConditionalGeneration
distill_config = T5Config(d_model=256, d_kv = 32, d_ff=512, num_heads=4, decoder_start_token_id=0)
model = T5ForConditionalGeneration(config=distill_config)
model.save_pretrained("T5-pt")
distill_config = T5Config(d_model=256, d_kv = 32, d_ff=512, num_heads=4, decoder_start_token_id=0)
model = TFT5ForConditionalGeneration(config=distill_config)
model.from_pretrained("T5-pt", from_pt=True)
```
**Output:**
_The following warnings were observed for the `t5-small` model too:_
```
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFT5ForConditionalGeneration: ['lm_head.weight', 'encoder.embed_tokens.weight', 'decoder.embed_tokens.weight']
- This IS expected if you are initializing TFT5ForConditionalGeneration from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFT5ForConditionalGeneration from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
All the weights of TFT5ForConditionalGeneration were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training.
<transformers.models.t5.modeling_tf_t5.TFT5ForConditionalGeneration at 0x7f96b8372dc0>
```
_If your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training._
To check whether the outputs were similar, I used the following code on a sample input:
**Code:**
Pytorch:
```python
tokenizer = AutoTokenizer.from_pretrained('t5-small')
def test(text, model, tokenizer):
tokenized = tokenizer(text, return_tensors='pt', padding='max_length', truncation=True)
print(tokenizer.batch_decode(model.generate(tokenized["input_ids"]).tolist(), skip_special_tokens=True))
test("summarize: i got permission to begin a start up company by my own..</s>", model, tokenizer)
```
TF2.0:
```python
from tensorflow.python.ops.numpy_ops import np_config
np_config.enable_numpy_behavior()
def test(text, model, tokenizer):
tokenized = tokenizer(text, return_tensors='tf', padding='max_length', truncation=True)
print(tokenizer.batch_decode(model.generate(tokenized["input_ids"]).tolist(), skip_special_tokens=True))
test("summarize: i got permission to begin a start up company by my own..</s>", model, tokenizer)
```
Output:
Pytorch: ``` ['cod cod cod cod cod cod cod cod cod cod cod cod cod cod cod cod cod cod cod']```
Tensorflow: ``` ['allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance'] ```
I have run the above code in the Colab notebook [here](https://colab.research.google.com/drive/1CJlQ5rVf5b3BysAnP6Y43I2LWl5ExcT2?usp=sharing)
### Expected behavior
I have trained a `T5ForConditionalGeneration` model with a custom config using PyTorch, and now I am trying to load it into TensorFlow, but some weights of the PyTorch model were not used when initializing the TF 2.0 model `TFT5ForConditionalGeneration`. I got the same warning when trying with `t5-small` (see the Colab notebook [here](https://colab.research.google.com/drive/1CJlQ5rVf5b3BysAnP6Y43I2LWl5ExcT2?usp=sharing)).
I would like to know how to "properly" import a `T5ForConditionalGeneration` model that was trained in PyTorch into `TFT5ForConditionalGeneration`.
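Per the resolution in the comments above, the discrepancy comes from the freshly initialized model being in training mode (dropout active); a minimal sketch of the fix:
```python
from transformers import T5Config, T5ForConditionalGeneration, TFT5ForConditionalGeneration, set_seed

set_seed(0)  # for reproducibility
distill_config = T5Config(d_model=256, d_kv=32, d_ff=512, num_heads=4, decoder_start_token_id=0)
model = T5ForConditionalGeneration(config=distill_config).eval()  # eval() is the key fix
model.save_pretrained("T5-pt")
tf_model = TFT5ForConditionalGeneration.from_pretrained("T5-pt", from_pt=True)
```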
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21686/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21685
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21685/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21685/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21685/events
|
https://github.com/huggingface/transformers/issues/21685
| 1,590,536,101
|
I_kwDOCUB6oc5ezael
| 21,685
|
`modeling_opt.py` if `previous_key_values` given and `attention_mask==None` the model throws an error.
|
{
"login": "GabrielKP",
"id": 40501279,
"node_id": "MDQ6VXNlcjQwNTAxMjc5",
"avatar_url": "https://avatars.githubusercontent.com/u/40501279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GabrielKP",
"html_url": "https://github.com/GabrielKP",
"followers_url": "https://api.github.com/users/GabrielKP/followers",
"following_url": "https://api.github.com/users/GabrielKP/following{/other_user}",
"gists_url": "https://api.github.com/users/GabrielKP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GabrielKP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GabrielKP/subscriptions",
"organizations_url": "https://api.github.com/users/GabrielKP/orgs",
"repos_url": "https://api.github.com/users/GabrielKP/repos",
"events_url": "https://api.github.com/users/GabrielKP/events{/privacy}",
"received_events_url": "https://api.github.com/users/GabrielKP/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hey! Thanks for submitting this issue! \r\nPassing attention maks solves the problem, and usually we expect to pass attention masks when you are using the `past_key_values`(for example in generate). It is debatable whether the default behaviour should rely on the past_key_values. \r\nDo you have a specific usage in mind? \r\n\r\nThe following works as expected: \r\n```python \r\nattn = torch.cat((tokenized1[\"attention_mask\"], tokenized2[\"attention_mask\"]), -1)\r\ntext2 = \"bug\"\r\ntokenized2 = tokenizer(text2, return_tensors='pt')\r\nmodel(input_ids=tokenized2[\"input_ids\"], past_key_values=past_key_values,attention_mask =attn)\r\n```\r\nThis way is the expected usage. When training or doing an inference, you should probably be in a for loop where the attention mask is defined based on the entire input. \r\n",
"I agree that manually adding the attention_mask is an easy fix.\r\n\r\nI am using a shared context as `past_key_values` and then computing different model outputs given the context. In that case I save the contexts `past_key_values` and use them later on. It is easy to recompute/save the contexts attention_mask and concat it for every output - but\r\n* OPT model behavior is inconsistent to other model's I have been using (gpt-neo, bloom)\r\n* it is [not documented](https://huggingface.co/docs/transformers/v4.26.1/en/model_doc/opt#transformers.OPTForCausalLM.forward.past_key_values) that the expected usage is passing the `attention_mask` when using `past_key_values`\r\n* the thrown error is not descriptive of the issue\r\n\r\nI do not understand what you mean with \"default behaviour should rely on the past_key_values\" - it seems to me that default behavior is not affected by changing this: line [636](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/opt/modeling_opt.py#L636) seems to have exactly the same job that [639 - 642](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/opt/modeling_opt.py#L639) has, just that it does not take into account `past_key_values` introducing the deviation of model behavior to other models.\r\n\r\nI can understand if you say that passing `attention_mask` is expected behavior for using `past_key_values`, but maybe that could be mentioned somewhere?",
"Totally agree with you, will open a PR to adress this. I think this was also blocking us from adding the ONNX config for this model! \r\nThanks for this ๐ \r\n"
] | 1,676
| 1,680
| 1,680
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-4.18.0-147.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
## Code
1. Load opt/tokenizer
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "facebook/opt-125m"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
2. Precompute `past_key_values`
```py
text1 = "let's find a"
tokenized1 = tokenizer(text1, return_tensors='pt')
past_key_values = model(**tokenized1, use_cache=True)["past_key_values"]
```
3. Compute another set of values without `attention_mask`
```py
text2 = "bug"
tokenized2 = tokenizer(text2, return_tensors='pt')
model(input_ids=tokenized2["input_ids"], past_key_values=past_key_values)
# error! The model mistakenly creates an attention_mask that is too small.
```
(try `distilgpt2` and it will work)
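A working variant, following the suggestion in the comments above (the attention mask has to span the cached context plus the new tokens):
```py
import torch

attn = torch.cat((tokenized1["attention_mask"], tokenized2["attention_mask"]), dim=-1)
model(input_ids=tokenized2["input_ids"], past_key_values=past_key_values, attention_mask=attn)
```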
## stack trace
```
Traceback (most recent call last):
File "/home/gkressi1/opt/ldet/rate_in-context.py", line 334, in <module>
main()
File "/home/gkressi1/opt/ldet/rate_in-context.py", line 325, in main
output_config = compute_surprisals(config=config, model_object=model_object)
File "/home/gkressi1/opt/ldet/rate_in-context.py", line 219, in compute_surprisals
output_rating = model_object.incontext(config, prompt_list)
File "/home/gkressi1/opt/ldet/src/model_objects/model_hf_causal_lm_big.py", line 85, in incontext
output = self.get_model_output(rest_prompt, use_cache=True)
File "/home/gkressi1/opt/ldet/src/model_objects/model_hf_causal_lm_big.py", line 63, in get_model_output
output = self.model(
File "/home/gkressi1/.conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/gkressi1/.conda/envs/llm/lib/python3.9/site-packages/accelerate/hooks.py", line 158, in new_forward
output = old_forward(*args, **kwargs)
File "/home/gkressi1/.conda/envs/llm/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 932, in forward
outputs = self.model.decoder(
File "/home/gkressi1/.conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/gkressi1/.conda/envs/llm/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 639, in forward
attention_mask = self._prepare_decoder_attention_mask(
File "/home/gkressi1/.conda/envs/llm/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 546, in _prepare_decoder_attention_mask
expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
RuntimeError: The size of tensor a (93) must match the size of tensor b (1679) at non-singleton dimension 3
```
### Expected behavior
The model should create the attention mask by itself and not throw an error.
On the surface, this seems to be an easy fix:
1. Delete lines [635](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/opt/modeling_opt.py#L635) and [636](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/opt/modeling_opt.py#L635)
2. Move lines [639-642](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/opt/modeling_opt.py#L639) ahead of what is currently line [637](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/opt/modeling_opt.py#L637)
3. Check TF/Flax models (?).
All the best!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21685/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21684
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21684/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21684/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21684/events
|
https://github.com/huggingface/transformers/pull/21684
| 1,590,521,449
|
PR_kwDOCUB6oc5KRwNx
| 21,684
|
Add loss for BridgeTowerForMaskedLM and BridgeTowerForImageAndTextRetrieval
|
{
"login": "abhiwand",
"id": 12353176,
"node_id": "MDQ6VXNlcjEyMzUzMTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/12353176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhiwand",
"html_url": "https://github.com/abhiwand",
"followers_url": "https://api.github.com/users/abhiwand/followers",
"following_url": "https://api.github.com/users/abhiwand/following{/other_user}",
"gists_url": "https://api.github.com/users/abhiwand/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhiwand/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhiwand/subscriptions",
"organizations_url": "https://api.github.com/users/abhiwand/orgs",
"repos_url": "https://api.github.com/users/abhiwand/repos",
"events_url": "https://api.github.com/users/abhiwand/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhiwand/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @amyeroberts and @younesbelkada ",
"Thank @amyeroberts, @younesbelkada, @regisss for your review and your suggestions. We have addressed your comments and have added few tests for loss computation and for forward/backward as suggested ones. \r\nCan you please help to merge this PR if possible? Thanks a lot",
"Thank @amyeroberts for approving this PR.\r\nThank @younesbelkada for your suggestion. I have resolved the failed quality tests. Can you please approve and merge this PR? c\r\nThanks a lot",
"> Let's see what @ydshieh & @sgugger will say\r\n\r\nNo problem for me, as this is used in > 100 places, and the tensor is changed to a python scalar before using it.\r\n"
] | 1,676
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds losses to `BridgeTowerForMaskedLM` and `BridgeTowerForImageAndTextRetrieval`.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21684/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21684",
"html_url": "https://github.com/huggingface/transformers/pull/21684",
"diff_url": "https://github.com/huggingface/transformers/pull/21684.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21684.patch",
"merged_at": 1677604908000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21683
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21683/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21683/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21683/events
|
https://github.com/huggingface/transformers/pull/21683
| 1,590,501,485
|
PR_kwDOCUB6oc5KRsfG
| 21,683
|
Update summarization.mdx
|
{
"login": "danadascalescu00",
"id": 48893255,
"node_id": "MDQ6VXNlcjQ4ODkzMjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/48893255?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danadascalescu00",
"html_url": "https://github.com/danadascalescu00",
"followers_url": "https://api.github.com/users/danadascalescu00/followers",
"following_url": "https://api.github.com/users/danadascalescu00/following{/other_user}",
"gists_url": "https://api.github.com/users/danadascalescu00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danadascalescu00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danadascalescu00/subscriptions",
"organizations_url": "https://api.github.com/users/danadascalescu00/orgs",
"repos_url": "https://api.github.com/users/danadascalescu00/repos",
"events_url": "https://api.github.com/users/danadascalescu00/events{/privacy}",
"received_events_url": "https://api.github.com/users/danadascalescu00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21683). All of your documentation changes will be reflected on that endpoint."
] | 1,676
| 1,677
| 1,677
|
NONE
| null |
Fix link in documentation
Fixes #21596
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21683/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21683",
"html_url": "https://github.com/huggingface/transformers/pull/21683",
"diff_url": "https://github.com/huggingface/transformers/pull/21683.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21683.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21682
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21682/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21682/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21682/events
|
https://github.com/huggingface/transformers/issues/21682
| 1,590,296,341
|
I_kwDOCUB6oc5eyf8V
| 21,682
|
kros_test
|
{
"login": "brigs1",
"id": 30627711,
"node_id": "MDQ6VXNlcjMwNjI3NzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/30627711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brigs1",
"html_url": "https://github.com/brigs1",
"followers_url": "https://api.github.com/users/brigs1/followers",
"following_url": "https://api.github.com/users/brigs1/following{/other_user}",
"gists_url": "https://api.github.com/users/brigs1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brigs1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brigs1/subscriptions",
"organizations_url": "https://api.github.com/users/brigs1/orgs",
"repos_url": "https://api.github.com/users/brigs1/repos",
"events_url": "https://api.github.com/users/brigs1/events{/privacy}",
"received_events_url": "https://api.github.com/users/brigs1/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[] | 1,676
| 1,676
| null |
NONE
| null |
### Model description
First test model, trained on 100 pages
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21682/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/21681
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21681/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21681/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21681/events
|
https://github.com/huggingface/transformers/issues/21681
| 1,589,997,442
|
I_kwDOCUB6oc5exW-C
| 21,681
|
Default Datatype issue with model on OPT-13B
|
{
"login": "lanking520",
"id": 11890922,
"node_id": "MDQ6VXNlcjExODkwOTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/11890922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lanking520",
"html_url": "https://github.com/lanking520",
"followers_url": "https://api.github.com/users/lanking520/followers",
"following_url": "https://api.github.com/users/lanking520/following{/other_user}",
"gists_url": "https://api.github.com/users/lanking520/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lanking520/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lanking520/subscriptions",
"organizations_url": "https://api.github.com/users/lanking520/orgs",
"repos_url": "https://api.github.com/users/lanking520/repos",
"events_url": "https://api.github.com/users/lanking520/events{/privacy}",
"received_events_url": "https://api.github.com/users/lanking520/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"That is incorrect. The dtype of a model in PyTorch is always float32, regardless of the dtype of the checkpoint you saved. If you load a float16 checkpoint in a model you create (which is in float32 by default), the dtype that is kept at the end is the dtype of the model, not the dtype of the checkpoint. This is because many hardwares do not actually support other dtypes than float32 (for instance you won't be able to generate on the CPU if your model is in float16).\r\n\r\nTo load a model in float16, you have to ask explicitly with `torch_dtype=torch.float16` in your `from_pretrained` call. To load the model in the precision saved, you have to use `torch_dtype=\"auto\"`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,680
| 1,680
|
NONE
| null |
### System Info
Any CPU machine with Transformers 4.26.0
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b")
print(model.dtype)
```
torch.float32 is printed out
### Expected behavior
I expect it to be float16. The model saved in the Hugging Face repo is in float16 format; converting it to float32 may mess up the behavior.
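As the comments above explain, the target precision has to be requested explicitly; a minimal sketch:
```python
import torch
from transformers import AutoModelForCausalLM

# Load explicitly in float16 ...
model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16)

# ... or in whatever precision the checkpoint was saved in.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype="auto")
print(model.dtype)  # torch.float16
```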
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21681/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21680
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21680/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21680/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21680/events
|
https://github.com/huggingface/transformers/issues/21680
| 1,589,788,741
|
I_kwDOCUB6oc5ewkBF
| 21,680
|
model fine-tuning error
|
{
"login": "marcomameli1992",
"id": 58846715,
"node_id": "MDQ6VXNlcjU4ODQ2NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/58846715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcomameli1992",
"html_url": "https://github.com/marcomameli1992",
"followers_url": "https://api.github.com/users/marcomameli1992/followers",
"following_url": "https://api.github.com/users/marcomameli1992/following{/other_user}",
"gists_url": "https://api.github.com/users/marcomameli1992/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcomameli1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcomameli1992/subscriptions",
"organizations_url": "https://api.github.com/users/marcomameli1992/orgs",
"repos_url": "https://api.github.com/users/marcomameli1992/repos",
"events_url": "https://api.github.com/users/marcomameli1992/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcomameli1992/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I have found the solution. The problem is in the Bach dimension for the evaluation that is greater than the number of rows inside the evaluation dataset. Now I have set it to 16 and it works.\r\n\r\nBut now I have a problem with the inference because I have trained the model with tokenizer max_length settings seated to 1024 and model with the same settings but when I use the model weight for the inference I continue to receive the error about the size of the tensor:\r\n\r\nRuntimeError: The size of tensor a (726) must match the size of tensor b (512) at non-singleton dimension 1",
"Hey! Could you give more details on the exact trace that you are getting? I have no idea where it comes from so can't really help, it could be a problem with loading the checkpoints or anything. \r\nAlso can you share a simple inference reproducing script? Thanks! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,680
| 1,680
|
NONE
| null |
### System Info
Hi to all,
I fine-tune a model for my dataset.
But I need some help with inference. I trained the model with a tokenizer without truncation, but I receive the first error at inference time. So I tried to retrain the model with truncation activated, as shown in the code, but I encountered a new error during training that only appeared after adding truncation to the tokenizer.
Now, if I try to train the network without truncation in the tokenizer, training does not work either, and I need help understanding what happens.
### Who can help?
@ArthurZucker
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Inference error:
RuntimeError: The size of tensor a (726) must match the size of tensor b (512) at non-singleton dimension 1
The function for classification is:
```
def classification_infer(data, model_path):
# device
device = find_device()
# data preprocessing
data[['notuseful', 'usefull']] = data['Descrizione'].apply(text_splitting)
data = data.loc[~data['Classe'].isin([0, 1, 2])]
# model loading
model = AutoModelForSequenceClassification.from_pretrained(model_path, num_labels=3)
tokenizer = AutoTokenizer.from_pretrained(model_path)
print("Model loaded")
print("Classifying...")
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer, device=device)
tokenizer_kwargs = {'padding': True, 'truncation': True, 'max_length': 512, 'return_tensors': 'pt'}
classifier_output = classifier(data['usefull'].tolist())
print("Classification completed")
data['Classe'] = [int(x['label']) for x in classifier_output]
return data
```
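Note that `tokenizer_kwargs` above is never forwarded to the pipeline call; a minimal sketch of passing the truncation settings at call time (an assumed fix, not confirmed in the thread):
```python
# Hypothetical fix sketch (reuses `classifier` and `data` from the function
# above): forward truncation settings at call time so inputs longer than
# 512 tokens are cut down before reaching the model.
texts = data['usefull'].tolist()
classifier_output = classifier(texts, padding=True, truncation=True, max_length=512)
```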
Training function, with truncation active:
```
def classification_train(data):
# metric
metric = evaluate.load('accuracy')
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
# Load tokenizer
tokenizer_kwargs = {'padding': True, 'truncation': True, 'max_length': 512, 'return_tensors': 'pt'}
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-cased", tokenizer_kwargs=tokenizer_kwargs)
# preprocessing function
def preprocess_function(data):
return tokenizer(data['text'])
# device
device = find_device()
# data preprocessing
data[['notuseful', 'usefull']] = data['Descrizione'].apply(text_splitting)
# dataset creation
dataset = pd.DataFrame()
dataset[['text', 'label']] = data.loc[data['Classe'].isin([0, 1, 2]), ['usefull', 'Classe']]
dataset['label'] = dataset['label'].astype(int)
# dataset equilibrium
dataset = dataset.groupby('label').head(100)
dataset['text'] = dataset['text'].map(lambda x: x.lower())
# dataset split
train, test = train_test_split(dataset, test_size=0.2, random_state=42)
# huggingface dataset
train = Dataset.from_pandas(train)
train = train.map(preprocess_function, batched=True)
test = Dataset.from_pandas(test)
test = test.map(preprocess_function, batched=True)
# Load model
model = AutoModelForSequenceClassification.from_pretrained("dbmdz/bert-base-italian-cased", num_labels=3)
model.to(device)
# data collator
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
# training arguments
training_args = TrainingArguments(
output_dir='./model_weight', # output directory
num_train_epochs=2, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
weight_decay=0.01, # strength of weight decay
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
metric_for_best_model='eval_accuracy',
greater_is_better=True,
report_to="wandb", # enable logging to W&B
run_name="bert-base-italian-cased-fit-for-crm", # name of the W&B run (optional)
label_names=['0', '1', '2']
)
# trainer
trainer = Trainer(
model=model, # the instantiated ๐ค Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train, # training dataset
eval_dataset=test, # evaluation dataset
data_collator=data_collator, # data collator
tokenizer=tokenizer, # tokenizer
compute_metrics=compute_metrics, # the callback that computes metrics of interest
)
# train
trainer.train()
```
Error during the evaluation step with the training function with truncation active:
IndexError: too many indices for array: array is 0-dimensional, but 1 were indexed
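One hedged guess at this IndexError (not confirmed in the thread): `label_names` in `TrainingArguments` is the list of *input dictionary keys* that hold labels, not the class names, so `['0', '1', '2']` makes the Trainer look for non-existent label columns during evaluation. A minimal sketch:
```python
from transformers import TrainingArguments

# Hypothetical fix: leave label_names unset, or point it at the real label key.
training_args = TrainingArguments(
    output_dir="./model_weight",
    evaluation_strategy="epoch",
    label_names=["labels"],  # not ['0', '1', '2']
)
```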
Training function without truncation:
```
def classification_train(data):
# metric
metric = evaluate.load('accuracy')
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-cased")
# preprocessing function
def preprocess_function(data):
return tokenizer(data['text'], truncation=False)
# device
device = find_device()
# data preprocessing
data[['notuseful', 'usefull']] = data['Descrizione'].apply(text_splitting)
# dataset creation
dataset = pd.DataFrame()
dataset[['text', 'label']] = data.loc[data['Classe'].isin([0, 1, 2]), ['usefull', 'Classe']]
dataset['label'] = dataset['label'].astype(int)
# dataset equilibrium
dataset = dataset.groupby('label').head(100)
dataset['text'] = dataset['text'].map(lambda x: x.lower())
# dataset split
train, test = train_test_split(dataset, test_size=0.2, random_state=42)
# huggingface dataset
train = Dataset.from_pandas(train)
train = train.map(preprocess_function, batched=True)
test = Dataset.from_pandas(test)
test = test.map(preprocess_function, batched=True)
# Load model
model = AutoModelForSequenceClassification.from_pretrained("dbmdz/bert-base-italian-cased", num_labels=3)
model.to(device)
# data collator
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
# training arguments
training_args = TrainingArguments(
output_dir='./model_weight', # output directory
num_train_epochs=2, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
weight_decay=0.01, # strength of weight decay
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
metric_for_best_model='eval_accuracy',
greater_is_better=True,
report_to="wandb", # enable logging to W&B
run_name="bert-base-italian-cased-fit-for-crm", # name of the W&B run (optional)
label_names=['0', '1', '2']
)
# trainer
trainer = Trainer(
model=model, # the instantiated ๐ค Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train, # training dataset
eval_dataset=test, # evaluation dataset
data_collator=data_collator, # data collator
tokenizer=tokenizer, # tokenizer
compute_metrics=compute_metrics, # the callback that computes metrics of interest
)
# train
trainer.train()
```
### Expected behavior
I expect the training function to let me train the network with the tokenizer arguments, and I expect inference to work when I try to classify a text.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21680/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21679
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21679/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21679/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21679/events
|
https://github.com/huggingface/transformers/pull/21679
| 1,589,543,355
|
PR_kwDOCUB6oc5KOnct
| 21,679
|
Add ConvNeXT V2
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds [ConvNeXT V2](https://arxiv.org/pdf/2301.00808.pdf) to transformers, including a backbone. ConvNeXT V2 features minimal changes to the `ConvNextLayer` and achieves an average 1% accuracy gain over ConvNext V1. Original repo is over [here](https://github.com/facebookresearch/ConvNeXt-V2).
- [x] Upload ImageNet 1K fine-tuned models
- [x] Upload ImageNet 22K fine-tuned models
- [x] Update model cards
- [ ] Fix TF model bugs
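For context, a hedged usage sketch of the class this PR adds (the checkpoint name follows the usual `facebook/convnext*` convention and is an assumption):
```python
from PIL import Image
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification

checkpoint = "facebook/convnextv2-tiny-1k-224"  # assumed checkpoint name
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = ConvNextV2ForImageClassification.from_pretrained(checkpoint)

image = Image.new("RGB", (224, 224))  # dummy image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```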
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21679/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21679",
"html_url": "https://github.com/huggingface/transformers/pull/21679",
"diff_url": "https://github.com/huggingface/transformers/pull/21679.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21679.patch",
"merged_at": 1678784895000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21678
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21678/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21678/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21678/events
|
https://github.com/huggingface/transformers/issues/21678
| 1,589,447,515
|
I_kwDOCUB6oc5evQtb
| 21,678
|
cached_path disappeared from the API
|
{
"login": "johann-petrak",
"id": 619106,
"node_id": "MDQ6VXNlcjYxOTEwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/619106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johann-petrak",
"html_url": "https://github.com/johann-petrak",
"followers_url": "https://api.github.com/users/johann-petrak/followers",
"following_url": "https://api.github.com/users/johann-petrak/following{/other_user}",
"gists_url": "https://api.github.com/users/johann-petrak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johann-petrak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johann-petrak/subscriptions",
"organizations_url": "https://api.github.com/users/johann-petrak/orgs",
"repos_url": "https://api.github.com/users/johann-petrak/repos",
"events_url": "https://api.github.com/users/johann-petrak/events{/privacy}",
"received_events_url": "https://api.github.com/users/johann-petrak/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The `cached_path` API was a private util for our downloads (note that we consider as private anything that is not in the main init). None of the research projects are actively maintained so they will only work with the version of Transformers corresponding to their creation.\r\n\r\nYou should use the [`huggingface_hub` library](https://github.com/huggingface/huggingface_hub) to manage downloads and cache of files on the Hub now. The closest thing we have to `cached_path` is `transformers.utils.hub.cached_file` in the current version.",
"Thanks, I will see if I can monkey-patch that tool/library accordingly. "
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
A tool using an older version of transformers uses
```
from transformers.file_utils import cached_path
```
However, this API function disappeared sometime in 2022, and I could not find any information about what it should be replaced with, nor any changelog entry about the removal.
Even in the current transformers repo, this method is still used in some example files, sometimes re-defined locally and sometimes via the same import, which no longer works:
```
examples/research_projects/visual_bert/modeling_frcnn.py:from utils import WEIGHTS_NAME, Config, cached_path, hf_bucket_url, is_remote_url, load_checkpoint
examples/research_projects/visual_bert/modeling_frcnn.py: resolved_archive_file = cached_path(
examples/research_projects/visual_bert/utils.py: resolved_config_file = cached_path(
examples/research_projects/visual_bert/utils.py:def cached_path(
examples/research_projects/lxmert/modeling_frcnn.py:from utils import WEIGHTS_NAME, Config, cached_path, hf_bucket_url, is_remote_url, load_checkpoint
examples/research_projects/lxmert/modeling_frcnn.py: resolved_archive_file = cached_path(
examples/research_projects/lxmert/utils.py: resolved_config_file = cached_path(
examples/research_projects/lxmert/utils.py:def cached_path(
examples/research_projects/pplm/run_pplm.py:from transformers.file_utils import cached_path
examples/research_projects/pplm/run_pplm.py: resolved_archive_file = cached_path(params["url"])
examples/research_projects/pplm/run_pplm.py: filepath = cached_path(BAG_OF_WORDS_ARCHIVE_MAP[id_or_path])
examples/research_projects/seq2seq-distillation/_test_bash_script.py:from transformers.file_utils import cached_path
examples/research_projects/seq2seq-distillation/_test_bash_script.py: data_cached = cached_path(
```
### Expected behavior
The import should remain possible for backwards compatibility, or the documentation should explain what to replace it with.
Version 4.21.0 seems to be the last version from which that function could be imported.
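For anyone landing here, a hedged sketch of the current replacements mentioned in the maintainer's reply (the repo and filename are illustrative):
```python
# Preferred: download/cache files from the Hub via huggingface_hub.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")

# Closest in-library equivalent of cached_path (a private util, may change):
from transformers.utils.hub import cached_file

resolved = cached_file("bert-base-uncased", "config.json")
```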
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21678/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21678/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21677
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21677/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21677/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21677/events
|
https://github.com/huggingface/transformers/issues/21677
| 1,589,306,502
|
I_kwDOCUB6oc5euuSG
| 21,677
|
Protobuf 4 support
|
{
"login": "RobinKa",
"id": 2614101,
"node_id": "MDQ6VXNlcjI2MTQxMDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2614101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RobinKa",
"html_url": "https://github.com/RobinKa",
"followers_url": "https://api.github.com/users/RobinKa/followers",
"following_url": "https://api.github.com/users/RobinKa/following{/other_user}",
"gists_url": "https://api.github.com/users/RobinKa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RobinKa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RobinKa/subscriptions",
"organizations_url": "https://api.github.com/users/RobinKa/orgs",
"repos_url": "https://api.github.com/users/RobinKa/repos",
"events_url": "https://api.github.com/users/RobinKa/events{/privacy}",
"received_events_url": "https://api.github.com/users/RobinKa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Last time we checked, `protobuf>=4` was blowing up sentencepiece entirely, which is a dependency we really need in Transformers. I don't know if that has been fixed since then, maybe @ydshieh could check when he has some time?",
"Running T5 tokenization tests gets a lot of failure (T5 tokenizer use `sentencepiece` ), if I use `protobuf==4.22.0`\r\n\r\nAlso, I see the following conflict when I installed latest `protobuf`. \r\n```bash\r\ntensorflow 2.11.0 requires protobuf<3.20,>=3.9.2, but you have protobuf 4.22.0 which is incompatible.\r\ntensorboardx 2.5.1 requires protobuf<=3.20.1,>=3.8.0, but you have protobuf 4.22.0 which is incompatible.\r\ntensorboard 2.11.1 requires protobuf<4,>=3.9.2, but you have protobuf 4.22.0 which is incompatible.\r\nray 2.0.0 requires protobuf<4.0.0,>=3.15.3, but you have protobuf 4.22.0 which is incompatible.\r\nonnx 1.12.0 requires protobuf<=3.20.1,>=3.12.2, but you have protobuf 4.22.0 which is incompatible.\r\n```\r\nIf `tensorflow` is installed in this case, even pytorch tests will fail, as there is\r\n```bash\r\n File \"/home/huggingface/transformers-hf-gcp/src/transformers/trainer_utils.py\", line 47, in <module>\r\n import tensorflow as tf\r\n```\r\n\r\n\r\n",
"So it looks like lots of libraries in our soft dependencies do not support protobuf 4 yet. We won't be able to offer support either until they do :-)"
] | 1,676
| 1,679
| 1,679
|
NONE
| null |
### Feature request
Currently transformers requires protobuf 3 or lower
https://github.com/huggingface/transformers/blob/a8eb4f79f946c5785f0e91b356ce328248916a05/setup.py#L141
Support for version 4 should be added.
### Motivation
Some Python packages only work with protobuf 4 so transformers is incompatible with them (for example [flytekit](https://github.com/flyteorg/flytekit) >= 1.3).
### Your contribution
-
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21677/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21676
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21676/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21676/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21676/events
|
https://github.com/huggingface/transformers/pull/21676
| 1,589,180,033
|
PR_kwDOCUB6oc5KNYsP
| 21,676
|
Generate: eta sampling numerical stability
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
MEMBER
| null |
# What does this PR do?
Minor numerical stability patch: before this change, the exception below would sometimes pop up, especially at lower numerical precisions.
Computing the entropy from the logits instead avoids the exception.
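A minimal sketch of the change (an illustration, not the exact diff): building the `Categorical` from the raw logits lets the normalization happen internally in a numerically stable way, instead of validating pre-computed (possibly degenerate) probabilities.
```python
import torch

scores = torch.randn(16, 32128)  # dummy next-token logits
entropy = torch.distributions.Categorical(logits=scores).entropy()
print(entropy.shape)  # torch.Size([16])
```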
<details>
```py
โ /home/joao/transformers/src/transformers/generation/logits_process.py:423 in __call__ โ
โ โ
โ 420 โ โ # Calculate the adaptive cutoff โ
โ 421 โ โ probabilities = scores.softmax(dim=-1) โ
โ 422 โ โ print("probs > 0:", (probabilities > 0).sum(dim=1).max()) โ
โ โฑ 423 โ โ entropy = torch.distributions.Categorical(probs=probabilities).entropy() โ
โ 424 โ โ eta = torch.min(self.epsilon, torch.sqrt(self.epsilon) * torch.exp(-entropy))[.. โ
โ 425 โ โ indices_to_remove = probabilities < eta โ
โ 426 โ
โ โ
โ /home/joao/hf/lib/python3.10/site-packages/torch/distributions/categorical.py:66 in __init__ โ
โ โ
โ 63 โ โ self._param = self.probs if probs is not None else self.logits โ
โ 64 โ โ self._num_events = self._param.size()[-1] โ
โ 65 โ โ batch_shape = self._param.size()[:-1] if self._param.ndimension() > 1 else torch โ
โ โฑ 66 โ โ super(Categorical, self).__init__(batch_shape, validate_args=validate_args) โ
โ 67 โ โ
โ 68 โ def expand(self, batch_shape, _instance=None): โ
โ 69 โ โ new = self._get_checked_instance(Categorical, _instance) โ
โ โ
โ /home/joao/hf/lib/python3.10/site-packages/torch/distributions/distribution.py:56 in __init__ โ
โ โ
โ 53 โ โ โ โ value = getattr(self, param) โ
โ 54 โ โ โ โ valid = constraint.check(value) โ
โ 55 โ โ โ โ if not valid.all(): โ
โ โฑ 56 โ โ โ โ โ raise ValueError( โ
โ 57 โ โ โ โ โ โ f"Expected parameter {param} " โ
โ 58 โ โ โ โ โ โ f"({type(value).__name__} of shape {tuple(value.shape)}) " โ
โ 59 โ โ โ โ โ โ f"of distribution {repr(self)} " โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
ValueError: Expected parameter probs (Tensor of shape (16, 32128)) of distribution Categorical(probs: torch.Size([16, 32128])) to satisfy the
constraint Simplex(), but found invalid values:
tensor([[0.0000e+00, 3.9062e-03, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00,
0.0000e+00],
[0.0000e+00, 3.4485e-03, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00,
0.0000e+00],
[0.0000e+00, 7.6953e-01, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00,
0.0000e+00],
...,
[0.0000e+00, 6.3477e-03, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00,
0.0000e+00],
[0.0000e+00, 2.8381e-03, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00,
0.0000e+00],
[0.0000e+00, 6.3479e-06, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00,
0.0000e+00]], device='cuda:0', dtype=torch.bfloat16)
```
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21676/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21676",
"html_url": "https://github.com/huggingface/transformers/pull/21676",
"diff_url": "https://github.com/huggingface/transformers/pull/21676.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21676.patch",
"merged_at": 1676653778000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21675
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21675/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21675/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21675/events
|
https://github.com/huggingface/transformers/pull/21675
| 1,589,148,611
|
PR_kwDOCUB6oc5KNR53
| 21,675
|
Fix multi-gpu training error for LayoutLMv2
|
{
"login": "akkikiki",
"id": 1423362,
"node_id": "MDQ6VXNlcjE0MjMzNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1423362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akkikiki",
"html_url": "https://github.com/akkikiki",
"followers_url": "https://api.github.com/users/akkikiki/followers",
"following_url": "https://api.github.com/users/akkikiki/following{/other_user}",
"gists_url": "https://api.github.com/users/akkikiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akkikiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akkikiki/subscriptions",
"organizations_url": "https://api.github.com/users/akkikiki/orgs",
"repos_url": "https://api.github.com/users/akkikiki/repos",
"events_url": "https://api.github.com/users/akkikiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/akkikiki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @amyeroberts "
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #14110
## Issue
When training a LayoutLMv2 model with multiple GPUs using `torchrun --standalone --nnodes=1 --nproc_per_node=$NUM_GPUS run_layoutlmv2.py` (single node, multi-gpu), I encounter
```
RuntimeError: Make sure the number of processes can be divided by the number of nodes
```
## What this PR fixes
Fixes a one-character typo/bug so training runs on multiple GPUs.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21675/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21675",
"html_url": "https://github.com/huggingface/transformers/pull/21675",
"diff_url": "https://github.com/huggingface/transformers/pull/21675.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21675.patch",
"merged_at": 1676653452000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21674
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21674/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21674/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21674/events
|
https://github.com/huggingface/transformers/issues/21674
| 1,589,139,302
|
I_kwDOCUB6oc5euFdm
| 21,674
|
KerasMetricCallback expecting dictionary but receiving numpy array
|
{
"login": "leadbetterben",
"id": 66632075,
"node_id": "MDQ6VXNlcjY2NjMyMDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/66632075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leadbetterben",
"html_url": "https://github.com/leadbetterben",
"followers_url": "https://api.github.com/users/leadbetterben/followers",
"following_url": "https://api.github.com/users/leadbetterben/following{/other_user}",
"gists_url": "https://api.github.com/users/leadbetterben/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leadbetterben/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leadbetterben/subscriptions",
"organizations_url": "https://api.github.com/users/leadbetterben/orgs",
"repos_url": "https://api.github.com/users/leadbetterben/repos",
"events_url": "https://api.github.com/users/leadbetterben/events{/privacy}",
"received_events_url": "https://api.github.com/users/leadbetterben/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Rocketknight1 ",
"Hi @leadbetterben, the problem arises because the metric callback was intended for use with `transformers` models, which generally return dicts or tuples of outputs rather than just a single array. This was an oversight on my part - I'll see if I can push a fix!",
"@leadbetterben I've created [a PR](https://github.com/huggingface/transformers/pull/21727) to resolve this issue. Can you try it out? To use the PR branch, replace the first block in your Colab notebook with this:\r\n```\r\n!pip install git+https://github.com/huggingface/transformers.git@metric_callback_fix\r\n!pip install evaluate\r\n```",
"@Rocketknight1 I've just tested it out and it seems to work fine. Thank you for that ๐ ",
"@leadbetterben The PR has now been merged. You can use it by installing `transformers` from `main` with `!pip install git+https://github.com/huggingface/transformers.git`. It'll also be included in our next release, after which you can go back to just using `pip install transformers`.\r\n\r\nThanks again for the bug report!",
"Thank you @Rocketknight1 !"
] | 1,676
| 1,677
| 1,677
|
NONE
| null |
### System Info
Running in Google Colab with `!pip install transformers evaluate` as the first cell. The results of `transformers-cli env` are:
- `transformers` version: 4.26.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
[View the Google Colab here](https://colab.research.google.com/drive/1Pgc1jkZZbMmOF4O8Tz4Nz81V5FqkqCr-?usp=sharing)
Code:
```
!pip install transformers evaluate
import tensorflow as tf
import evaluate
from numpy import argmax as np_argmax
from transformers import create_optimizer
from transformers.keras_callbacks import KerasMetricCallback
tf.debugging.disable_traceback_filtering()
train_texts = ["This is class 0", "I am a class 1 sentence", "Class 2", "Also class 2"]
train_labels = [0, 1, 2, 2]
test_texts = ["A class 1 example", "Testing class 0"]
test_labels = [1, 0]
num_classes = 3
batch_size = 16
def create_dataset(texts, labels):
dataset = tf.data.Dataset.from_tensor_slices((texts, labels))
return dataset.shuffle(10000).batch(batch_size).prefetch(tf.data.AUTOTUNE)
train_dataset = create_dataset(train_texts, train_labels)
test_dataset = create_dataset(test_texts, test_labels)
encoder = tf.keras.layers.TextVectorization()
encoder.adapt(train_dataset.map(lambda text, label: text))
model = tf.keras.Sequential([
encoder,
tf.keras.layers.Embedding(
input_dim=len(encoder.get_vocabulary()),
output_dim=64,
# Use masking to handle the variable sequence lengths
mask_zero=True),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(num_classes, activation='sigmoid')
])
num_epochs = 5
batches_per_epoch = len(train_texts) // batch_size
total_train_steps = int(batches_per_epoch * num_epochs)
optimizer, schedule = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(),
optimizer=optimizer)
accuracy = evaluate.load("accuracy")
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = np_argmax(predictions, axis=1)
return accuracy.compute(predictions=predictions, references=labels)
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=test_dataset)
callbacks = [metric_callback]
model.fit(x=train_dataset, validation_data=test_dataset, epochs=num_epochs, callbacks=callbacks)
```
Full error stack trace:
```
Epoch 1/5
1/1 [==============================] - ETA: 0s - loss: 1.0996
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-17-e77f61379ec7>](https://localhost:8080/#) in <module>
----> 1 model.fit(x=train_dataset, validation_data=test_dataset, epochs=num_epochs, callbacks=callbacks)
3 frames
[/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py](https://localhost:8080/#) in error_handler(*args, **kwargs)
59 def error_handler(*args, **kwargs):
60 if not tf.debugging.is_traceback_filtering_enabled():
---> 61 return fn(*args, **kwargs)
62
63 filtered_tb = None
[/usr/local/lib/python3.8/dist-packages/keras/engine/training.py](https://localhost:8080/#) in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1710 epoch_logs.update(val_logs)
1711
-> 1712 callbacks.on_epoch_end(epoch, epoch_logs)
1713 training_logs = epoch_logs
1714 if self.stop_training:
[/usr/local/lib/python3.8/dist-packages/keras/callbacks.py](https://localhost:8080/#) in on_epoch_end(self, epoch, logs)
452 logs = self._process_logs(logs)
453 for callback in self.callbacks:
--> 454 callback.on_epoch_end(epoch, logs)
455
456 def on_train_batch_begin(self, batch, logs=None):
[/usr/local/lib/python3.8/dist-packages/transformers/keras_callbacks.py](https://localhost:8080/#) in on_epoch_end(self, epoch, logs)
236 predictions = {key: predictions[key] for key in self.output_cols}
237 else:
--> 238 predictions = {key: val for key, val in predictions.items() if key not in ignore_keys + ["loss"]}
239 prediction_list.append(predictions)
240 if not self.use_keras_label:
AttributeError: 'numpy.ndarray' object has no attribute 'items'
```
### Expected behavior
The code is adapted from a [HuggingFace text classification tutorial](https://huggingface.co/docs/transformers/tasks/sequence_classification#text-classification) and a [TensorFlow text classification with an RNN tutorial](https://www.tensorflow.org/text/tutorials/text_classification_rnn). The optimizer, metrics and callbacks are from the HuggingFace tutorial. The encoder and model are from the TensorFlow tutorial.
The error given is `AttributeError: 'numpy.ndarray' object has no attribute 'items'` and occurs from [line 237 of keras_callbacks.py](https://github.com/huggingface/transformers/blob/main/src/transformers/keras_callbacks.py#L237). The code around this seems to expect to be dealing with a dictionary or a subclass of a dictionary as a result of the `predict_on_batch` function called on the model. However, [the documentation for the TensorFlow Model class](https://www.tensorflow.org/api_docs/python/tf/keras/Model#predict_on_batch), [the documentation for the TensorFlow Sequential Class (which subclasses Model) ](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential#predict_on_batch) and [the source code for the `predict_on_batch` method](https://github.com/keras-team/keras/blob/v2.11.0/keras/engine/training.py#L2547-L2572) show that it returns a numpy array.
I would expect this code not to error, and the callback to successfully call the metrics function with the expected predictions.
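As a stop-gap until the callback handles bare arrays, a minimal sketch of the kind of guard it needs (names mirror `keras_callbacks.py`; this is an illustration, not the merged fix):
```python
import numpy as np

ignore_keys = []                # mirrors the callback's ignore_keys
predictions = np.zeros((2, 3))  # plain Keras models return a bare ndarray

if isinstance(predictions, dict):  # only filter keys for mapping outputs
    predictions = {k: v for k, v in predictions.items() if k not in ignore_keys + ["loss"]}
print(type(predictions))  # <class 'numpy.ndarray'> is kept as-is
```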
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21674/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21673
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21673/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21673/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21673/events
|
https://github.com/huggingface/transformers/pull/21673
| 1,588,648,236
|
PR_kwDOCUB6oc5KLmSf
| 21,673
|
Added Type Hints for modeling_tf_encoder_decoder.py
|
{
"login": "Batese2001",
"id": 69521504,
"node_id": "MDQ6VXNlcjY5NTIxNTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/69521504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Batese2001",
"html_url": "https://github.com/Batese2001",
"followers_url": "https://api.github.com/users/Batese2001/followers",
"following_url": "https://api.github.com/users/Batese2001/following{/other_user}",
"gists_url": "https://api.github.com/users/Batese2001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Batese2001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Batese2001/subscriptions",
"organizations_url": "https://api.github.com/users/Batese2001/orgs",
"repos_url": "https://api.github.com/users/Batese2001/repos",
"events_url": "https://api.github.com/users/Batese2001/events{/privacy}",
"received_events_url": "https://api.github.com/users/Batese2001/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@Rocketknight1 I think this is ready, though I am getting an odd error saying \"module 'tensorflow' has no attribute 'tensor'\" which I am not sure how to resolve. Thanks and let me know if there is anything I need to fix! ",
"@Batese2001 The issue was that one of your hints was `tf.tensor` instead of `tf.Tensor` - it was case-sensitive! I just changed it there, let's see if that fixes the tests. Note that because I made the change in the PR branch, you should `pull` those changes before committing/pushing any further updates, or you might get a branch conflict.",
"_The documentation is not available anymore as the PR was closed or merged._",
"Tests look good - that Torch error is totally unrelated to this PR. Are you happy for me to merge the PR at this point?",
"Wonderful! If you think it is ready, then I am all for merging! Thank you for your help\r\n",
"Merged, and thanks for your help!"
] | 1,676
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
This pull request adds type hints for modeling_tf_encoder_decoder.py, as outlined in Issue #16059, submitted from a feature branch rather than my main branch.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21673/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21673",
"html_url": "https://github.com/huggingface/transformers/pull/21673",
"diff_url": "https://github.com/huggingface/transformers/pull/21673.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21673.patch",
"merged_at": 1677161306000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21672
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21672/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21672/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21672/events
|
https://github.com/huggingface/transformers/issues/21672
| 1,588,582,469
|
I_kwDOCUB6oc5er9hF
| 21,672
|
Trainer state visualization in TrOCR checkpoint
|
{
"login": "Mohammed20201991",
"id": 59222637,
"node_id": "MDQ6VXNlcjU5MjIyNjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/59222637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mohammed20201991",
"html_url": "https://github.com/Mohammed20201991",
"followers_url": "https://api.github.com/users/Mohammed20201991/followers",
"following_url": "https://api.github.com/users/Mohammed20201991/following{/other_user}",
"gists_url": "https://api.github.com/users/Mohammed20201991/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mohammed20201991/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mohammed20201991/subscriptions",
"organizations_url": "https://api.github.com/users/Mohammed20201991/orgs",
"repos_url": "https://api.github.com/users/Mohammed20201991/repos",
"events_url": "https://api.github.com/users/Mohammed20201991/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mohammed20201991/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You should ask questions like this on the [forums](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,679
| 1,679
|
NONE
| null |
Hello everyone, I am trying to build one dictionary in Python that collects all the necessary data from the `trainer_state.json` file written by the Trainer class for the TrOCR model (I am not using Colab, so I need to build my own dict and visualize CER, WER, steps, epochs, ...).
Here is the Python code I wrote:
```python
import json
import pandas as pd

df = pd.read_json('/checkpoint-2000/trainer_state.json')
# print(df.head())
# print(df.to_string())
column_names = list(df.columns.values)
print(column_names)
# log_history = column_names[7]
# print(log_history[0])

# Opening JSON file
with open('/checkpoint-2000/trainer_state.json') as json_file:
    data = json.load(json_file)
# print("Type:", type(data))
# print('show log_history', data['log_history'])
log_history = data['log_history']
# print('\nlog_history\n', log_history[0]['epoch'])

odd_dict, even_dict = {}, {}
log_history_dict = {}
for count, value in enumerate(log_history):
    log_history_dict[count] = value
print('\nlog_history_dict \n', log_history_dict)
for k, v in log_history_dict.items():
    if k % 2 == 0:
        even_dict[k] = v
    else:
        odd_dict[k] = v
# print('\n even_dict', even_dict, '\nodd_dict', odd_dict)

# log_history_clean = {}
# for v in odd_dict.values():
#     log_history_clean['epoch'] = v['epoch']
#     log_history_clean['learning_rate'] = v['learning_rate']
#     log_history_clean['loss'] = v['loss']
#     log_history_clean['step'] = v['step']
#     # for key, value in v.items():
#     #     log_history_clean[key] = value
#     #     print(key, value)
# print(log_history_clean)
```
For reference, the `trainer_state.json` it reads looks like this:
```json
{
  "best_metric": null,
  "best_model_checkpoint": null,
  "epoch": 1.4265335235378032,
  "global_step": 2000,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "log_history": [
    {
      "epoch": 0.36,
      "learning_rate": 3.94339514978602e-05,
      "loss": 0.5516,
      "step": 500
    },
    {
      "epoch": 0.36,
      "eval_cer": 4.407666576772222,
      "eval_loss": 0.25193867087364197,
      "eval_runtime": 1338.5651,
      "eval_samples_per_second": 13.973,
      "eval_steps_per_second": 0.583,
      "eval_wer": 17.79562559983836,
      "step": 500
    }
  ]
}
```
The expected new JSON file should look like this (one merged record per step):
```json
{
  "index": 0,
  "epoch": 0.36,
  "learning_rate": 3.94339514978602e-05,
  "loss": 0.5516,
  "step": 500,
  "eval_cer": 4.407666576772222,
  "eval_loss": 0.25193867087364,
  "eval_runtime": 1338.5651,
  "eval_samples_per_second": 13.973,
  "eval_steps_per_second": 0.583,
  "eval_wer": 17.79562559983836
}
```
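A minimal sketch of producing that merged format, assuming `log_history` strictly alternates training and evaluation entries as in the sample above:
```python
import json

with open('/checkpoint-2000/trainer_state.json') as f:
    log_history = json.load(f)['log_history']

merged = []
for index, (train_log, eval_log) in enumerate(zip(log_history[::2], log_history[1::2])):
    record = {'index': index, **train_log, **eval_log}  # eval keys extend the train entry
    merged.append(record)

with open('trainer_state_merged.json', 'w') as f:
    json.dump(merged, f, indent=2)
```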
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21672/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21671
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21671/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21671/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21671/events
|
https://github.com/huggingface/transformers/pull/21671
| 1,588,521,293
|
PR_kwDOCUB6oc5KLLUX
| 21,671
|
Allows to use `decoder_inputs_embeds` for `model.generate`
|
{
"login": "Andrechang",
"id": 9553458,
"node_id": "MDQ6VXNlcjk1NTM0NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9553458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Andrechang",
"html_url": "https://github.com/Andrechang",
"followers_url": "https://api.github.com/users/Andrechang/followers",
"following_url": "https://api.github.com/users/Andrechang/following{/other_user}",
"gists_url": "https://api.github.com/users/Andrechang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Andrechang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Andrechang/subscriptions",
"organizations_url": "https://api.github.com/users/Andrechang/orgs",
"repos_url": "https://api.github.com/users/Andrechang/repos",
"events_url": "https://api.github.com/users/Andrechang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Andrechang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @gante ",
"And how about the EncoderDecoderModel like T5?\r\n\r\n\r\nI tried to replace the `prepare_inputs_for_generation` method only guided by [#6535](https://github.com/huggingface/transformers/issues/6535), but it does not work .... \r\n\r\n\r\n```\r\n\r\n\r\nclass CustomT5ForConditionalGeneration(T5ForConditionalGeneration):\r\n \r\n def prepare_inputs_for_generation(self,\r\n input_ids,\r\n past_key_values=None,\r\n attention_mask=None,\r\n head_mask=None,\r\n decoder_head_mask=None,\r\n cross_attn_head_mask=None,\r\n use_cache=None,\r\n encoder_outputs=None,\r\n **kwargs):\r\n res = super().prepare_inputs_for_generation(input_ids,\r\n past_key_values,\r\n attention_mask,\r\n head_mask,\r\n decoder_head_mask,\r\n cross_attn_head_mask,\r\n use_cache,\r\n encoder_outputs,\r\n **kwargs)\r\n # maybe another solution :https://github.com/huggingface/transformers/pull/21671\r\n \r\n # add decoder embeddings and mask\r\n if \"decoder_inputs_embeds\" in kwargs.keys():\r\n res[\"decoder_inputs_embeds\"] = kwargs[\"decoder_inputs_embeds\"]\r\n if \"decoder_attention_mask\" in kwargs.keys():\r\n res[\"decoder_attention_mask\"] = kwargs[\"decoder_attention_mask\"]\r\n \r\n # if `inputs_embeds` are passed, we only want to use them in the 1st generation step\r\n if past_key_values is None:\r\n del res[\"decoder_input_ids\"]\r\n else:\r\n # only last token for inputs_ids if past is defined in kwargs\r\n res['decoder_input_ids'] = res['decoder_input_ids'][:, -1].unsqueeze(-1)\r\n del res[\"decoder_inputs_embeds\"]\r\n \r\n return res\r\n```\r\n\r\n\r\n\r\n",
"Hey @YiandLi ๐ \r\n\r\nMy suggestion would be to open a separate issue for the support of a `decoder_input_embeds` input, like #6535, so the issue becomes clear and visible to everyone. Like in #6535, I'd be happy to a) share a temporary solution b) push a permanent solution if the issue acquires sufficient traction.\r\n\r\nNormally, I would not provide support for custom tasks, as my bandwidth is very limited, but according to this closed PR you are not the first person asking the question :)"
] | 1,676
| 1,687
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Allows to use `decoder_inputs_embeds` for `model.generate` in VisionEncoderDecoderModel
## Who can review?
Vision Model
@amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21671/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21671",
"html_url": "https://github.com/huggingface/transformers/pull/21671",
"diff_url": "https://github.com/huggingface/transformers/pull/21671.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21671.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21670
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21670/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21670/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21670/events
|
https://github.com/huggingface/transformers/pull/21670
| 1,588,365,935
|
PR_kwDOCUB6oc5KKpux
| 21,670
|
[`CLAP`] Fix few broken things
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"There are also 2 more lines that got erased, shared that with you on slack ๐ ",
"_The documentation is not available anymore as the PR was closed or merged._",
"Local doctests and tests pass: \r\n```\r\n============================================================ 119 passed, 39 skipped, 38 warnings in 75.88s (0:01:15) ============================================================\r\n```"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the forward pass that was broken in the `main` branch of `transformers` for `ClapModel`. To reproduce:
```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor
dataset = load_dataset('ashraq/esc50')
input_text = ["Sound of a dog", "Sound of vaccum cleaner"]
audio_sample = dataset["train"]["audio"][-1]['array']
model_id = "ybelkada/clap-htsat-unfused"
processor = ClapProcessor.from_pretrained(model_id)
model = ClapModel.from_pretrained(model_id)
input_text = processor.tokenizer(input_text, return_tensors="pt", padding=True)
input_sample = processor.feature_extractor(audio_sample, return_tensors="pt")
out = model(input_ids=input_text.input_ids, attention_mask=input_text.attention_mask, input_features=input_sample.input_features, is_longer=input_sample.is_longer)
print(out.logits_per_audio.softmax(dim=-1)[0])
```
This PR also fixes a few other nits that were missed during the bad rebase.
This PR also fixes the doctest and the failing slow tests
cc @ArthurZucker @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21670/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21670",
"html_url": "https://github.com/huggingface/transformers/pull/21670",
"diff_url": "https://github.com/huggingface/transformers/pull/21670.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21670.patch",
"merged_at": 1676629935000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21669
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21669/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21669/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21669/events
|
https://github.com/huggingface/transformers/issues/21669
| 1,588,326,405
|
I_kwDOCUB6oc5eq_AF
| 21,669
|
TypeError: 'NoneType' object is not callable when using run_clm.py to load a dataset
|
{
"login": "Norfaisbest",
"id": 81870363,
"node_id": "MDQ6VXNlcjgxODcwMzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/81870363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Norfaisbest",
"html_url": "https://github.com/Norfaisbest",
"followers_url": "https://api.github.com/users/Norfaisbest/followers",
"following_url": "https://api.github.com/users/Norfaisbest/following{/other_user}",
"gists_url": "https://api.github.com/users/Norfaisbest/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Norfaisbest/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Norfaisbest/subscriptions",
"organizations_url": "https://api.github.com/users/Norfaisbest/orgs",
"repos_url": "https://api.github.com/users/Norfaisbest/repos",
"events_url": "https://api.github.com/users/Norfaisbest/events{/privacy}",
"received_events_url": "https://api.github.com/users/Norfaisbest/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @Norfaisbest, thanks for raising this issue.\r\n\r\nIt seems this issue is arising from the dataset `test` being loaded with `load_dataset` and isn't a `transformers` issue. I would suggest trying to debug running just `load_dataset('dataset_name')` outside of the script when trying to debug. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,682
| 1,682
|
NONE
| null |
### System Info
```
- `transformers` version: 4.26.1
- Platform: Linux-6.0.0-6-amd64-x86_64-with-glibc2.36
- Python version: 3.9.12
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
Trace:
```
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โrun_clm.py:621 in <module> โ
โ โ
โ 618 โ
โ 619 โ
โ 620 if __name__ == "__main__": โ
โ โฑ 621 โ main() โ
โ 622 โ
โ โ
โrun_clm.py:286 in main โ
โ โ
โ 283 โ # download the dataset. โ
โ 284 โ if data_args.dataset_name is not None: โ
โ 285 โ โ # Downloading and loading a dataset from the hub. โ
โ โฑ 286 โ โ raw_datasets = load_dataset( โ
โ 287 โ โ โ data_args.dataset_name, โ
โ 288 โ โ โ data_args.dataset_config_name, โ
โ 289 โ โ โ cache_dir=model_args.cache_dir, โ
โ โ
โ anaconda3/lib/python3.9/site-packages/datasets/load.py:1735 in load_dataset โ
โ โ
โ 1732 โ ignore_verifications = ignore_verifications or save_infos โ
โ 1733 โ โ
โ 1734 โ # Create a dataset builder โ
โ โฑ 1735 โ builder_instance = load_dataset_builder( โ
โ 1736 โ โ path=path, โ
โ 1737 โ โ name=name, โ
โ 1738 โ โ data_dir=data_dir, โ
โ โ
โ anaconda3/lib/python3.9/site-packages/datasets/load.py:1519 in load_dataset_builder โ
โ โ
โ 1516 โ โ raise ValueError(error_msg) โ
โ 1517 โ โ
โ 1518 โ # Instantiate the dataset builder โ
โ โฑ 1519 โ builder_instance: DatasetBuilder = builder_cls( โ
โ 1520 โ โ cache_dir=cache_dir, โ
โ 1521 โ โ config_name=config_name, โ
โ 1522 โ โ data_dir=data_dir, โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. run `run_clm.py` with the following parameters.
```
--model_type gpt2 \
--output_dir ./models \
--do_train \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--save_total_limit 2 \
--save_steps 2000 \
--per_gpu_train_batch_size 16 \
--seed 42 \
--validation_file test.txt \
--do_eval \
--train_file text.txt \
--dataset_name test \
--tokenizer tokeniser2.py`
```
### Expected behavior
The program does not crash.
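For what it's worth, a minimal sketch of the debugging step suggested in the comments above: load the local text files directly with `datasets`, outside of `run_clm.py`, instead of resolving a hub dataset named `test` (the file names are the ones from the command line above):
```python
# Minimal sketch: reproduce the dataset loading outside run_clm.py.
# "text.txt" and "test.txt" are the files passed on the command line above.
from datasets import load_dataset

raw_datasets = load_dataset(
    "text",
    data_files={"train": "text.txt", "validation": "test.txt"},
)
print(raw_datasets)
```
If this works while `load_dataset("test")` fails, the crash comes from the `--dataset_name test` argument rather than from `transformers`.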
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21669/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21668
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21668/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21668/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21668/events
|
https://github.com/huggingface/transformers/issues/21668
| 1,588,247,705
|
I_kwDOCUB6oc5eqryZ
| 21,668
|
Add SeaFormer model
|
{
"login": "inderpreetsingh01",
"id": 54892545,
"node_id": "MDQ6VXNlcjU0ODkyNTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/54892545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/inderpreetsingh01",
"html_url": "https://github.com/inderpreetsingh01",
"followers_url": "https://api.github.com/users/inderpreetsingh01/followers",
"following_url": "https://api.github.com/users/inderpreetsingh01/following{/other_user}",
"gists_url": "https://api.github.com/users/inderpreetsingh01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/inderpreetsingh01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/inderpreetsingh01/subscriptions",
"organizations_url": "https://api.github.com/users/inderpreetsingh01/orgs",
"repos_url": "https://api.github.com/users/inderpreetsingh01/repos",
"events_url": "https://api.github.com/users/inderpreetsingh01/events{/privacy}",
"received_events_url": "https://api.github.com/users/inderpreetsingh01/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Hi. I would like to work on this.",
"Hi @inderpreetsingh01 thanks for opening the issue, SeaFormer definitely seems like a good addition to the library! \r\n\r\nAre you planning to work on this model? If not, @strankid could start working on it or you two could collaborate on a PR. In either case, you could take a look at our [model addition guidelines](https://huggingface.co/docs/transformers/add_new_model), as well as the transformers code of other segmentation models such as [SegFormer](https://github.com/huggingface/transformers/tree/main/src/transformers/models/segformer), [MaskFormer](https://github.com/huggingface/transformers/tree/main/src/transformers/models/maskformer), [Mask2Former](https://github.com/huggingface/transformers/tree/main/src/transformers/models/mask2former) and [OneFormer](https://github.com/huggingface/transformers/tree/main/src/transformers/models/oneformer).",
"hello, fancy to make [SETR](https://github.com/fudan-zvg/SETR) on board๏ผ ",
"@alaradirik should I begin with a WIP PR? ",
"Hi @alaradirik thanks for sharing the resources. I will be working on adding this model. @strankid if you want we can collaborate on this. @lzrobots i saw both SeaFormer and SETR use mmseg, we can look into it.",
"@inderpreetsingh01 i'm down to collaborate! ",
"Great :) @strankid @inderpreetsingh01 you can ping me if you have questions about the library or need help with anything (e.g. model conversion).\r\n\r\nIt'd be great if you could open a WIP PR, as it'd make it easier to ask / answer questions and do a preliminary review later down the road.",
"thanks @alaradirik will do it. @strankid can you share your mail id so that we can connect on slack?",
"@inderpreetsingh01 my email is apoorv96@gmail.com. Should I create the PR or would you like to? ",
"@strankid sure you can create the pr.",
"@inderpreetsingh01 Saw you created the wip pr. Since you have my email, just contact me and let me know how you want to split the work. "
] | 1,676
| 1,677
| null |
NONE
| null |
### Model description
The computational cost and memory requirements render many computer vision models unsuitable for mobile devices, especially for the high-resolution per-pixel semantic segmentation task. SeaFormer (Squeeze-enhanced Axial Transformer) introduces a generic attention block characterized by the formulation of squeeze Axial attention and detail enhancement. Coupled with a light segmentation head, it achieves the best trade-off between segmentation accuracy and latency on ARM-based mobile devices on the ADE20K and Cityscapes datasets, beating both mobile-friendly rivals and Transformer-based counterparts with better performance and lower latency.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Paper: https://arxiv.org/pdf/2301.13156.pdf
Code and weights: https://github.com/fudan-zvg/SeaFormer
Authors: @wwqq @lzrobots @speedinghzl
cc: @NielsRogge @alaradirik
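For intuition, a toy illustration of the squeeze-Axial idea described above (a heavily simplified concept sketch, not the authors' implementation): squeeze the feature map along each spatial axis, attend along the remaining axis, and broadcast the axial context back onto the map.
```python
# Toy concept sketch of squeeze-axial attention (not the paper's code).
import torch
import torch.nn as nn

class SqueezeAxialAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        rows = x.mean(dim=3).transpose(1, 2)  # squeeze W -> (B, H, C)
        cols = x.mean(dim=2).transpose(1, 2)  # squeeze H -> (B, W, C)
        rows, _ = self.row_attn(rows, rows, rows)  # attend along the H axis
        cols, _ = self.col_attn(cols, cols, cols)  # attend along the W axis
        # broadcast the axial contexts back onto the full map
        return x + rows.transpose(1, 2).unsqueeze(3) + cols.transpose(1, 2).unsqueeze(2)

out = SqueezeAxialAttention(dim=32)(torch.randn(2, 32, 16, 16))
print(out.shape)  # torch.Size([2, 32, 16, 16])
```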
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21668/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/21667
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21667/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21667/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21667/events
|
https://github.com/huggingface/transformers/issues/21667
| 1,588,203,727
|
I_kwDOCUB6oc5eqhDP
| 21,667
|
T5 Multi-GPU FSDP evaluation loop raises RuntimeError when predict_with_generate is True
|
{
"login": "eyalmazuz",
"id": 34383384,
"node_id": "MDQ6VXNlcjM0MzgzMzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/34383384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyalmazuz",
"html_url": "https://github.com/eyalmazuz",
"followers_url": "https://api.github.com/users/eyalmazuz/followers",
"following_url": "https://api.github.com/users/eyalmazuz/following{/other_user}",
"gists_url": "https://api.github.com/users/eyalmazuz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyalmazuz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyalmazuz/subscriptions",
"organizations_url": "https://api.github.com/users/eyalmazuz/orgs",
"repos_url": "https://api.github.com/users/eyalmazuz/repos",
"events_url": "https://api.github.com/users/eyalmazuz/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyalmazuz/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 4101623725,
"node_id": "LA_kwDOCUB6oc70ec-t",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch%20FSDP",
"name": "PyTorch FSDP",
"color": "B60205",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Hey @eyalmazuz ๐ \r\n\r\nLooking at the exception, it does not look like a generate error, but rather a pytorch/trainer-related issue (it fails in the embedding layer). I'm not very knowledgeable there, so I'm tagging @sgugger for a comment.\r\n\r\nBTW, without a short reproducible script, our ability to help is limited :)",
"> Hey @eyalmazuz wave\r\n> \r\n> Looking at the exception, it does not look like a generate error, but rather a pytorch/trainer-related issue (it fails in the embedding layer). I'm not very knowledgeable there, so I'm tagging @sgugger for a comment.\r\n> \r\n> BTW, without a short reproducible script, our ability to help is limited :)\r\n\r\nHi @gante \r\n\r\nI created a repository with all the code here:\r\nhttps://github.com/eyalmazuz/T5-Translation\r\n\r\nI think I uploaded everything needed\r\n\r\nIt is possible to use the validation file for training as well, the problem still persists\r\n\r\nand as I mentioned at the end of the issue, it only happens when ``predict_with_generate=True``, so I assumed it's an issue with it, in the way it is handled when generating outputs as part of the evaluation vs predicting",
"cc @pacman100 ",
"Hello, with FSDP it isn't supported as mentioned here: https://huggingface.co/docs/accelerate/usage_guides/fsdp#a-few-caveats-to-be-aware-of\r\n\r\n```\r\nThis feature is incompatible with --predict_with_generate in the run_translation.py script of ๐ค Transformers library.\r\n```",
"@eyalmazuz, the reason is the `generate` of transformers bypasses the FSDP module's `forward` and directly calls internal model's encoder which isn't wrapped in FSDP unit, because of this the parameters required for the forward pass aren't gathered leading to the error you notice above. \r\n\r\nRelated PRs to make `generate` work with FSDP, some hack is required:\r\nhttps://github.com/pytorch/pytorch/issues/82461",
"Even if one manually wraps encoder and decoder in separate FSDP units, it will still produce errors because shared parameters should be part of same FSDP unit which would now be broken because shared embedding layers of encoder and decoder will be in separate FSDP units: https://github.com/pytorch/pytorch/issues/79605",
"> Related PRs to make generate work with FSDP, some hack is required:\r\nhttps://github.com/pytorch/pytorch/issues/82461\r\n\r\nA hacky way proposed in above issue with PyTorch team is currently the only way to get `generate` to work with FSDP.",
"@pacman100 thank you for your reply\r\nIf I understood [https://github.com/pytorch/pytorch/issues/82461](https://github.com/pytorch/pytorch/issues/82461), then the issue occurs because FSDP wraps the entire T5 but not sub modules so when calling forward on T5 it works but calling directly on T5.encoder will not work since it's specifically not wrapped in FSDP.\r\n\r\nBut isn't adding ``auto_wrap`` to the FSDP params suppose to recursively wrap all layers in FSDP and thus solve the issue?\r\nas the documentation says\r\n```\r\nTo automatically recursively wrap layers with FSDP using default_auto_wrap_policy,\r\nadd --fsdp \"full_shard auto_wrap\" or --fsdp \"shard_grad_op auto_wrap\" to the command line arguments.\r\n```\r\nOr is it only wrapping T5Block in this case?\r\n\r\nI changed the seq2seq_trainer file and added a small dummy forward pass before ``model.generate`` as mentioned in [https://github.com/huggingface/accelerate/issues/570](https://github.com/huggingface/accelerate/issues/570)\r\n\r\n```\r\nmodel_inputs = self.tokenizer(\r\n \"ูู ููู
\", text_target=\"ืืืื ืฉื ื, ืืืขื ืื ืืืืช ืืกืคืจ\", max_length=10, return_tensors='pt', truncation=True\r\n)\r\n\r\noutputs = self.model(**model_inputs)\r\ngen_kwargs[\"synced_gpus\"] = True\r\n\r\ngenerated_tokens = self.model.generate(\r\n generation_inputs,\r\n **gen_kwargs,\r\n)\r\n```\r\n\r\nis ``synced_gpus=True`` needed? \r\nit works without it, but it'll keep it anyways",
"@eyalmazuz, transformer auto wrap only wraps T5Block modules in nested FSDP units\n\nthe encoder, decoder, lm_head and shared are part of the global FSDP unit and this is important too because embedding layers which are shared need to be part of the same FSDP unit, in this case the global one.\n\nIf one puts encoder and decoder modules in different nested FSDP units, shared embedding weights are no longer in same FSDP units leading to another error as mentioned in above comments ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,680
| 1,680
|
NONE
| null |
### System Info
Transformers version 4.27.0-dev
Python version 3.8.12
### Who can help?
@gante
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
After #21604, I tried optimizing my code a bit more. I read about DeepSpeed and FSDP and decided to try FSDP since it seemed simpler.
here's a link to the new code:
https://pastebin.com/n9Su4AiL
torchrun train_model.py --dataset_path ./data/HF_HE_AR_Dataset.json --tokenizer_path ./T5Tokenizer/ --max_length=128 --batch_size=4 --logging_steps 10 --save_steps 1000 --model google/t5-v1_1-large --validation_path ./data/dev.json --test_path ./data/test.json --weight_decay 0.0
When the code reaches the number of logging steps I defined (here, 10), it crashes with the following final error:
```
RuntimeError: The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy al
location, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
```
full traceback can be found here:
https://pastebin.com/ucZ021EQ
It happens with and without ``fsdp_transformer_layer_cls_to_wrap``, with any FSDP option (with and without ``auto_wrap``, and with both ``shard_grad_op`` and ``full_shard``), and with and without ``fp16=True``,
whenever ``predict_with_generate=True``.
If ``predict_with_generate=False``, it works fine.
### Expected behavior
Running FSDP with `predict_with_generate` successfully.
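For reference, a minimal sketch of the hacky workaround discussed in the comments above (run a dummy forward pass through the wrapped model so FSDP gathers the sharded parameters before `generate` calls the encoder directly; the model name and dummy strings are placeholders):
```python
# Sketch of the workaround: a dummy forward pass before generate().
# In the real Trainer this would live inside prediction_step, where the
# model is already wrapped in FSDP.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

dummy = tokenizer("hello", text_target="world", return_tensors="pt")
with torch.no_grad():
    model(**dummy)  # dummy forward pass: lets FSDP gather the sharded params

inputs = tokenizer("translate English to German: hello", return_tensors="pt")
generated = model.generate(**inputs, synced_gpus=False)  # use synced_gpus=True under FSDP
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```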
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21667/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21666
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21666/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21666/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21666/events
|
https://github.com/huggingface/transformers/issues/21666
| 1,587,940,277
|
I_kwDOCUB6oc5epgu1
| 21,666
|
NeoX underperforming on A100
|
{
"login": "mrseeker",
"id": 1099127,
"node_id": "MDQ6VXNlcjEwOTkxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1099127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrseeker",
"html_url": "https://github.com/mrseeker",
"followers_url": "https://api.github.com/users/mrseeker/followers",
"following_url": "https://api.github.com/users/mrseeker/following{/other_user}",
"gists_url": "https://api.github.com/users/mrseeker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrseeker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrseeker/subscriptions",
"organizations_url": "https://api.github.com/users/mrseeker/orgs",
"repos_url": "https://api.github.com/users/mrseeker/repos",
"events_url": "https://api.github.com/users/mrseeker/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrseeker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Are you using a 40 GB or 80 GB A100? A 40 GB A100 has slightly less VRAM than an A6000 (48 GB), and itโs possible that youโre tripping a failsafe such as CPU-offload to avoid an OOM error.\r\n\r\n12 billion params * 3 Bytes per parameter = 36 GB of VRAM. My general rule of thumb is to add 20% overhead space for a full context length of 2048, which would push the model over the 40 GB limit.\r\n\r\nOne way to test this theory would be to try running it with LLM.int8. Another would be to carefully monitor GPU and CPU usage during inference.",
"This might be the issue indeed. It's automatically offloading to CPU but not generating any overflow warnings, making it difficult to pinpoint where the issue would lie. Going for the 80Gb did indeed solve the issue."
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-50-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
When running EleutherAI/pythia-12b-deduped on an A100, generation slows down from 15 tok/s to 3 tok/s when sending 1024 tokens as input. This behaviour does not occur on an A6000 with the same setup (14.04 tok/s, 1024-token input). I have ruled out a "machine" issue, as this happens on multiple servers in multiple locations (runpod, vast.ai, google...).
Does any of you have an idea what the root cause of this issue could be? I am running pure Python with no Accelerate or anything else that could "interfere" with the speed of HF, using a standard KoboldAI application.
Tagging: @ArthurZucker @younesbelkada @StellaAthena
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Clone https://github.com/henk717/KoboldAI + install dependencies.
2. Start the KoboldAI using model EleutherAI/pythia-12b-deduped
3. Let it generate 100 tokens until it reaches 1024 tokens.
On the A100, token generation gradually slows down. On the A6000, this has no effect.
### Expected behavior
Expected behaviour would be that token generation speed stays the same, regardless of the amount of input inserted.
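As a side note on capacity, a back-of-the-envelope VRAM estimate along the lines of the rule of thumb in the comments above (the constants are assumptions, not measurements):
```python
# Back-of-the-envelope VRAM estimate (sketch; constants are assumptions).
def estimate_vram_gb(n_params: float, bytes_per_param: float = 2, overhead: float = 0.2) -> float:
    """fp16 weights are 2 bytes/param; `overhead` roughly covers activations/KV cache."""
    return n_params * bytes_per_param * (1 + overhead) / 1e9

print(estimate_vram_gb(12e9))                     # ~28.8 GB for a 12B model in fp16
print(estimate_vram_gb(12e9, bytes_per_param=3))  # ~43.2 GB with the 3-byte rule above
```
Either way, the estimate lands uncomfortably close to (or above) a 40 GB A100, which is consistent with silent CPU offload being the culprit.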
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21666/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21665
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21665/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21665/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21665/events
|
https://github.com/huggingface/transformers/pull/21665
| 1,587,684,818
|
PR_kwDOCUB6oc5KIWJe
| 21,665
|
Fix typos in contrastive-image-text example README
|
{
"login": "regisss",
"id": 15324346,
"node_id": "MDQ6VXNlcjE1MzI0MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regisss",
"html_url": "https://github.com/regisss",
"followers_url": "https://api.github.com/users/regisss/followers",
"following_url": "https://api.github.com/users/regisss/following{/other_user}",
"gists_url": "https://api.github.com/users/regisss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regisss/subscriptions",
"organizations_url": "https://api.github.com/users/regisss/orgs",
"repos_url": "https://api.github.com/users/regisss/repos",
"events_url": "https://api.github.com/users/regisss/events{/privacy}",
"received_events_url": "https://api.github.com/users/regisss/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes typos in https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/README.md.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21665/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21665",
"html_url": "https://github.com/huggingface/transformers/pull/21665",
"diff_url": "https://github.com/huggingface/transformers/pull/21665.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21665.patch",
"merged_at": 1676556625000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21664
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21664/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21664/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21664/events
|
https://github.com/huggingface/transformers/issues/21664
| 1,587,632,798
|
I_kwDOCUB6oc5eoVqe
| 21,664
|
AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'
|
{
"login": "k3ybladewielder",
"id": 50303964,
"node_id": "MDQ6VXNlcjUwMzAzOTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/50303964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/k3ybladewielder",
"html_url": "https://github.com/k3ybladewielder",
"followers_url": "https://api.github.com/users/k3ybladewielder/followers",
"following_url": "https://api.github.com/users/k3ybladewielder/following{/other_user}",
"gists_url": "https://api.github.com/users/k3ybladewielder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/k3ybladewielder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/k3ybladewielder/subscriptions",
"organizations_url": "https://api.github.com/users/k3ybladewielder/orgs",
"repos_url": "https://api.github.com/users/k3ybladewielder/repos",
"events_url": "https://api.github.com/users/k3ybladewielder/events{/privacy}",
"received_events_url": "https://api.github.com/users/k3ybladewielder/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @k3ybladewielder \r\nIt seems that this is related to your environment, can you create a fresh environment , install `transformers` `pip install transformers` and run the script again? or alternatively uninstall the package that is causing the issue? \r\nAlso please share with us the full trace of the error in your issue and not in the title as it is hard to understand what is going on with very few details! Thanks!",
"Sorry for forgot to share the full trace of error.\r\nI did it and now, the error is:\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<command-3729796333060543> in <module>\r\n 5 #model_path = 'distilbert-base-uncased-finetuned-sst-2-english' #para usar no target_lang\r\n 6 # tokenizer = AutoTokenizer.from_pretrained(model_path)\r\n----> 7 sentiment_task = pipeline(\"sentiment-analysis\", model=model_path, tokenizer=model_path)\r\n 8 sentiment_task(\"T'estimo!\")\r\n\r\n/databricks/python/lib/python3.7/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, model_kwargs, **kwargs)\r\n 500 \r\n 501 tokenizer = AutoTokenizer.from_pretrained(\r\n--> 502 tokenizer_identifier, revision=revision, use_fast=use_fast, _from_pipeline=task, **tokenizer_kwargs\r\n 503 )\r\n 504 \r\n\r\n/databricks/python/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)\r\n 496 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]\r\n 497 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None):\r\n--> 498 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n 499 else:\r\n 500 if tokenizer_class_py is not None:\r\n\r\n/databricks/python/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)\r\n 1747 *init_inputs,\r\n 1748 use_auth_token=use_auth_token,\r\n-> 1749 **kwargs,\r\n 1750 )\r\n 1751 \r\n\r\n/databricks/python/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, *init_inputs, **kwargs)\r\n 1869 # Instantiate tokenizer.\r\n 1870 try:\r\n-> 1871 tokenizer = cls(*init_inputs, **init_kwargs)\r\n 1872 except OSError:\r\n 1873 raise OSError(\r\n\r\n/databricks/python/lib/python3.7/site-packages/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py in __init__(self, vocab_file, tokenizer_file, bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, mask_token, **kwargs)\r\n 142 pad_token=pad_token,\r\n 143 mask_token=mask_token,\r\n--> 144 **kwargs,\r\n 145 )\r\n 146 \r\n\r\n/databricks/python/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py in __init__(self, *args, **kwargs)\r\n 116 else:\r\n 117 raise ValueError(\r\n--> 118 \"Couldn't instantiate the backend tokenizer from one of: \\n\"\r\n 119 \"(1) a `tokenizers` library serialization file, \\n\"\r\n 120 \"(2) a slow tokenizer instance to convert or \\n\"\r\n\r\nValueError: Couldn't instantiate the backend tokenizer from one of: \r\n(1) a `tokenizers` library serialization file, \r\n(2) a slow tokenizer instance to convert or \r\n(3) an equivalent slow tokenizer class to instantiate and convert. \r\nYou need to have sentencepiece installed to convert a slow tokenizer to a fast one.\r\n\r\n```",
"Thanks ! \r\nI think if you install `sentencepiece` the error should disapear\r\n`pip install sentencepiece`",
"Its works, thank you @younesbelkada "
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
## Environment info
transformers version: '4.26.1'
Platform: databricks
```
from transformers import pipeline
model_path = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("T'estimo!")
```
## Who can help
@Narsil @ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import pipeline
model_path = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("T'estimo!")
```
### Expected behavior
[{'label': 'Positive', 'score': 0.6600581407546997}]
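(Resolved in the comments above by installing `sentencepiece`. A quick check for the missing dependency, as a sketch:)
```python
# Quick sketch: the slow-to-fast tokenizer conversion needs sentencepiece.
import importlib.util

if importlib.util.find_spec("sentencepiece") is None:
    print("sentencepiece is missing -- run: pip install sentencepiece")
```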
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21664/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21663
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21663/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21663/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21663/events
|
https://github.com/huggingface/transformers/issues/21663
| 1,587,609,803
|
I_kwDOCUB6oc5eoQDL
| 21,663
|
CUBLAS_STATUS_INVALID_VALUE when generating with OPT models
|
{
"login": "jchwenger",
"id": 34098722,
"node_id": "MDQ6VXNlcjM0MDk4NzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/34098722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jchwenger",
"html_url": "https://github.com/jchwenger",
"followers_url": "https://api.github.com/users/jchwenger/followers",
"following_url": "https://api.github.com/users/jchwenger/following{/other_user}",
"gists_url": "https://api.github.com/users/jchwenger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jchwenger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jchwenger/subscriptions",
"organizations_url": "https://api.github.com/users/jchwenger/orgs",
"repos_url": "https://api.github.com/users/jchwenger/repos",
"events_url": "https://api.github.com/users/jchwenger/events{/privacy}",
"received_events_url": "https://api.github.com/users/jchwenger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @jchwenger ๐ \r\n\r\nI was able to run the script you shared without bugs on my end. Looking at other threads online, it may be due to an incorrect environment on your end -- check this thread from the comment I link [here](https://discuss.pytorch.org/t/runtimeerror-cuda-error-cublas-status-invalid-value-when-calling-cublassgemm-handle-opa-opb-m-n-k-alpha-a-lda-b-ldb-beta-c-ldc/124544/18) ๐ค ",
"Hi @gante,\r\n\r\nOh, I see, thanks a lot for this reference, I'll investigate.",
"Hi again @gante, thanks for the help, I had a PyTorch/Cuda mismatch, after reinstall using the command from the website l it works!"
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
### System Info
Hi,
I'm encountering an error when trying to do text generation with the OPT models. Here are the specs and the error, and below the steps to reproduce.
Ubuntu 18.04
Conda env:
torch 1.13.1 pypi_0 pypi
transformers 4.26.1 pypi_0 pypi
Python 3.9.7
The error:
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-1-45f0d829e48e> in <module>
6 prompt = "Hello, I am conscious and"
7 input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
----> 8 generated_ids = model.generate(input_ids)
9 text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
10 print(text)
~/anaconda3/envs/torch/lib/python3.9/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
28 return cast(F, decorate_context)
29
~/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/generation/utils.py in generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, **kwargs)
1389
1390 # 11. run greedy search
-> 1391 return self.greedy_search(
1392 input_ids,
1393 logits_processor=logits_processor,
~/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/generation/utils.py in greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs)
2177
2178 # forward pass to get next token
-> 2179 outputs = self(
2180 **model_inputs,
2181 return_dict=True,
~/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py in forward(self, input_ids, attention_mask, head_mask, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
930
931 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
--> 932 outputs = self.model.decoder(
933 input_ids=input_ids,
934 attention_mask=attention_mask,
~/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py in forward(self, input_ids, attention_mask, head_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
695 else:
696
--> 697 layer_outputs = decoder_layer(
698 hidden_states,
699 attention_mask=attention_mask,
~/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py in forward(self, hidden_states, attention_mask, layer_head_mask, output_attentions, use_cache, past_key_value)
324
325 # Self Attention
--> 326 hidden_states, self_attn_weights, present_key_value = self.self_attn(
    327             hidden_states=hidden_states,
    328             past_key_value=past_key_value,
~/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py in forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions)
206
207 src_len = key_states.size(1)
--> 208 attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))
209
210 if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasGemmStridedBatchedExFix( handle, opa, opb, m, n, k, (void*)(&falpha), a, CUDA_R_16F, lda, stridea, b, CUDA_R_16F, ldb, strideb, (void*)(&fbeta), c, CUDA_R_16F, ldc, stridec, num_batches, CUDA_R_32F, CUBLAS_GEMM_DEFAULT_TENSOR_OP)`
```
### Who can help?
@ArthurZucker, @younesbelkada, @sgugger, @sgugger, @muellerzr, @gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In an environment with PyTorch and Transformers, open an IPython console and paste the example [from the documentation](https://huggingface.co/facebook/opt-66b) (here with a 1.3b model, but I first tried the 6.7b):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b", torch_dtype=torch.float16).cuda() # I also tried without half-precision
# the fast tokenizer currently does not work correctly
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b", use_fast=False)
prompt = "Hello, I am conscious and"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
generated_ids = model.generate(input_ids)
text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(text)
```
### Expected behavior
The model should do inference and generate text out of the box.
Any help would be greatly appreciated, thanks for reading!
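(As it turned out in the comments above, this was a PyTorch/CUDA version mismatch. A quick environment sanity check along these lines can surface it:)
```python
# Quick sanity check for a PyTorch/CUDA mismatch (sketch).
import torch

print(torch.__version__)          # PyTorch build, e.g. 1.13.1+cu117
print(torch.version.cuda)         # CUDA version PyTorch was compiled against
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0))
# If torch.version.cuda disagrees with the driver/toolkit on the machine,
# reinstall PyTorch with the matching command from pytorch.org.
```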
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21663/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21662
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21662/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21662/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21662/events
|
https://github.com/huggingface/transformers/issues/21662
| 1,587,505,992
|
I_kwDOCUB6oc5en2tI
| 21,662
|
Remote code is loaded from `main` even when revision is provided
|
{
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Will have a look even if you didn't properly tag me :-p ",
"The PR linked above fixes the config problem (I can load it with `AutoConfig`). The model still won't load however as the auto mpa of that config doesn't contain an entry for `AutoModelForCausalLM`."
] | 1,676
| 1,676
| 1,676
|
MEMBER
| null |
### System Info
Specifying a branch to load a model with remote code, as follows, fails because there is no modeling file on `main`. Is this a bug or the expected behaviour?
### Who can help?
The one and only _**@sgugger**_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
model = transformers.AutoModelForCausalLM.from_pretrained("bigcode/santacoder-fast-inference", revision="main_custom", trust_remote_code=True)
```
The following error shows that the code file is attempted to be loaded from `main` instead of `main_custom` (where a modeling file is present):
```bash
Could not locate the configuration_gpt_bigcode.py inside bigcode/santacoder-fast-inference.
Traceback (most recent call last):
File "/work/arjunguha-research-group/arjun/venvs/bigcode/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 264, in hf_raise_for_status
response.raise_for_status()
File "/shared/centos7/python/3.8.1/lib/python3.8/site-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/bigcode/santacoder-fast-inference/resolve/main/configuration_gpt_bigcode.py
```
### Expected behavior
Loading without error.
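One way to confirm where the file actually lives (a sketch using `huggingface_hub`; the repo and branch names are the ones from the report):
```python
# Sketch: the file resolves fine on the custom branch, which suggests the
# 404 comes from `main` being queried instead of the requested revision.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bigcode/santacoder-fast-inference",
    filename="configuration_gpt_bigcode.py",
    revision="main_custom",
)
print(path)
```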
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21662/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21661
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21661/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21661/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21661/events
|
https://github.com/huggingface/transformers/issues/21661
| 1,587,298,334
|
I_kwDOCUB6oc5enEAe
| 21,661
|
gpt2 can't be trained for QA?
|
{
"login": "ucas010",
"id": 50656998,
"node_id": "MDQ6VXNlcjUwNjU2OTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/50656998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ucas010",
"html_url": "https://github.com/ucas010",
"followers_url": "https://api.github.com/users/ucas010/followers",
"following_url": "https://api.github.com/users/ucas010/following{/other_user}",
"gists_url": "https://api.github.com/users/ucas010/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ucas010/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ucas010/subscriptions",
"organizations_url": "https://api.github.com/users/ucas010/orgs",
"repos_url": "https://api.github.com/users/ucas010/repos",
"events_url": "https://api.github.com/users/ucas010/events{/privacy}",
"received_events_url": "https://api.github.com/users/ucas010/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"As said multiple times in the past, please use the [forums](https://discuss.huggingface.co/) for questions like this.",
"@sgugger this is a bug, not question",
"I believe it's because `gpt2` doesn't have a `QuestionAnswering` head(like `GPTJForQuestionAnswering`), I would be happy to implement that if @sgugger approves the addition. ",
"I don't see a bug. GPT-2 is not meant to be used for question-answering. You can find the list of architectures that support this task by reading the error message or having a look at the question-answering [task page](https://huggingface.co/docs/transformers/main/en/tasks/question_answering) in the doc (first tip in green).\r\n\r\n@susnato Decoder models perform really poorly on this task, so there is no point adding GPT2ForQuestionAnswering IMO.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> I don't see a bug. GPT-2 is not meant to be used for question-answering. You can find the list of architectures that support this task by reading the error message or having a look at the question-answering [task page](https://huggingface.co/docs/transformers/main/en/tasks/question_answering) in the doc (first tip in green).\r\n> \r\n> @susnato Decoder models perform really poorly on this task, so there is no point adding GPT2ForQuestionAnswering IMO.\r\n\r\nIs it worth including in the library for completeness? I'm trying to use the Cerebras-GPT model suite for some Question Answering tasks and they inherit from the GPT2Model class. Could we still include it?",
"> I don't see a bug. GPT-2 is not meant to be used for question-answering. You can find the list of architectures that support this task by reading the error message or having a look at the question-answering [task page](https://huggingface.co/docs/transformers/main/en/tasks/question_answering) in the doc (first tip in green).\r\n> \r\n> @susnato Decoder models perform really poorly on this task, so there is no point adding GPT2ForQuestionAnswering IMO.\r\n\r\nquestion answer task page mentions support for GPT2Model class, is that a bug?? ",
"@kumaramit003 Support for question-answering for the GPT-2 model was added recently in #23030 "
] | 1,676
| 1,684
| 1,679
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-3.10.0-1160.81.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The code is similar to [link](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering),
but the model is changed to gpt2:
```
python run_qa.py \
--model_name_or_path gpt2 \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
```
or
```
python run_seq2seq_qa.py \
--model_name_or_path gpt2 \
--dataset_name squad_v2 \
--context_column context \
--question_column question \
--answer_column answers \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_seq2seq_squad/
```
ValueError: Unrecognized configuration class <class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'> for this kind of AutoModel: AutoModelForQuestionAnswering.
Model type should be one of AlbertConfig, BartConfig, BertConfig, BigBirdConfig, BigBirdPegasusConfig, BloomConfig, CamembertConfig, CanineConfig, ConvBertConfig, Data2VecTextConfig, DebertaConfig, DebertaV2Config, DistilBertConfig, ElectraConfig, ErnieConfig, FlaubertConfig, FNetConfig, FunnelConfig, GPTJConfig, IBertConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LiltConfig, LongformerConfig, LukeConfig, LxmertConfig, MarkupLMConfig, MBartConfig, MegatronBertConfig, MobileBertConfig, MPNetConfig, MvpConfig, NezhaConfig, NystromformerConfig, OPTConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, SplinterConfig, SqueezeBertConfig, XLMConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, YosoConfig.
### Expected behavior
Looking forward to your kind reply. Thanks!
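For anyone hitting this, a quick, hedged way to check up front whether a checkpoint's architecture supports `AutoModelForQuestionAnswering` (the mapping below is an internal detail of `transformers` and may change between versions):

```python
from transformers import AutoConfig
from transformers.models.auto.modeling_auto import MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES

config = AutoConfig.from_pretrained("gpt2")
# The mapping is keyed by model type, e.g. "bert", "gptj", ...
print(config.model_type in MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES)
# False at the time of this issue; GPT-2 support only landed later in #23030
```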
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21661/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21661/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21660
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21660/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21660/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21660/events
|
https://github.com/huggingface/transformers/pull/21660
| 1,587,120,088
|
PR_kwDOCUB6oc5KGc5i
| 21,660
|
make opt checkpoint dir name correct
|
{
"login": "dumpmemory",
"id": 64742282,
"node_id": "MDQ6VXNlcjY0NzQyMjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/64742282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dumpmemory",
"html_url": "https://github.com/dumpmemory",
"followers_url": "https://api.github.com/users/dumpmemory/followers",
"following_url": "https://api.github.com/users/dumpmemory/following{/other_user}",
"gists_url": "https://api.github.com/users/dumpmemory/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dumpmemory/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dumpmemory/subscriptions",
"organizations_url": "https://api.github.com/users/dumpmemory/orgs",
"repos_url": "https://api.github.com/users/dumpmemory/repos",
"events_url": "https://api.github.com/users/dumpmemory/events{/privacy}",
"received_events_url": "https://api.github.com/users/dumpmemory/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21660). All of your documentation changes will be reflected on that endpoint.",
"cc @pacman100 ",
"Friendly ping @pacman100 ",
"@pacman100 ?",
"Hello, I will look into this tomorrow, thank you for your patience and sorry for the delay."
] | 1,676
| 1,683
| 1,683
|
CONTRIBUTOR
| null |
# What does this PR do?
I found I can't load the converted checkpoint with tp_8 pp_1 dp_1 or tp_4 pp_2 dp_1; only tp_2 pp_2 dp_2 works. I checked the code and found it was likely an issue with the OPT checkpoint directory name, so I fixed it, and it works on my side.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/accelerate/issues/1088
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21660/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21660",
"html_url": "https://github.com/huggingface/transformers/pull/21660",
"diff_url": "https://github.com/huggingface/transformers/pull/21660.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21660.patch",
"merged_at": 1683638042000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21659
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21659/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21659/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21659/events
|
https://github.com/huggingface/transformers/issues/21659
| 1,587,085,487
|
I_kwDOCUB6oc5emQCv
| 21,659
|
KeyError: 'eval_f1' when hyperparameter tuning distilroberta with raytune and population based training
|
{
"login": "aelb66",
"id": 75398560,
"node_id": "MDQ6VXNlcjc1Mzk4NTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/75398560?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aelb66",
"html_url": "https://github.com/aelb66",
"followers_url": "https://api.github.com/users/aelb66/followers",
"following_url": "https://api.github.com/users/aelb66/following{/other_user}",
"gists_url": "https://api.github.com/users/aelb66/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aelb66/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aelb66/subscriptions",
"organizations_url": "https://api.github.com/users/aelb66/orgs",
"repos_url": "https://api.github.com/users/aelb66/repos",
"events_url": "https://api.github.com/users/aelb66/events{/privacy}",
"received_events_url": "https://api.github.com/users/aelb66/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to help debug your code as we keep issues for bugs and feature requests only. Here the metric you are using with `metric_for_best_model=\"eval_f1\"` does not exist as your `compute_metrics` function only returns the following keys:\r\n```py\r\n{\r\n 'macro_f1' : macro_f1, \r\n 'macro_precision': macro_precision,\r\n 'macro_recall': macro_recall,\r\n 'balanced_accuracy': acc\r\n }\r\n```\r\n`eval_macro_f1` would work better.\r\n",
"Thank you and sorry about that!"
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
## Who can help:
ray/raytune: @richardliaw, @amogkam
trainer: @sgugger
## Information
I'm trying to hyperparameter-tune DistilRoberta with Ray Tune using PBT and the HuggingFace Trainer API. I'm using Google Colab with 1 GPU to tune the model.
## Error Message
All my trials have the same error; this is just the error for the third trial:
```
(_objective pid=42097)
91%|โโโโโโโโโ | 50/55 [00:05<00:00, 8.73it/s]
(_objective pid=42097)
93%|โโโโโโโโโโ| 51/55 [00:05<00:00, 8.72it/s]
(_objective pid=42097)
95%|โโโโโโโโโโ| 52/55 [00:05<00:00, 8.72it/s]
(_objective pid=42097)
96%|โโโโโโโโโโ| 53/55 [00:05<00:00, 8.73it/s]
(_objective pid=42097)
98%|โโโโโโโโโโ| 54/55 [00:06<00:00, 8.73it/s]
25%|โโโ | 438/1752 [02:43<07:14, 3.02it/s]
100%|โโโโโโโโโโ| 55/55 [00:06<00:00, 8.73it/s]
2023-02-16 05:38:41,958 ERROR trial_runner.py:1088 -- Trial _objective_f0650_00002: Error processing event.
ray.exceptions.RayTaskError(KeyError): ray::ImplicitFunc.train() (pid=42097, ip=172.28.0.12, repr=_objective)
File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable/trainable.py", line 367, in train
raise skipped from exception_cause(skipped)
File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable/function_trainable.py", line 335, in entrypoint
return self._trainable_func(
File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable/function_trainable.py", line 652, in _trainable_func
output = fn()
File "/usr/local/lib/python3.8/dist-packages/transformers/integrations.py", line 332, in dynamic_modules_import_trainable
return trainable(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable/util.py", line 386, in inner
return trainable(config, **fn_kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/integrations.py", line 233, in _objective
local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1543, in train
return inner_training_loop(
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1883, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2132, in _maybe_log_save_evaluate
self._report_to_hp_search(trial, self.state.global_step, metrics)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1229, in _report_to_hp_search
self.objective = self.compute_objective(metrics.copy())
File "<ipython-input-27-3decc80fdc30>", line 3, in my_objective
KeyError: 'eval_f1'
(_objective pid=42097) precision recall f1-score support
(_objective pid=42097)
(_objective pid=42097) 0 0.812 0.745 0.777 145
(_objective pid=42097) 1 0.650 0.988 0.784 173
(_objective pid=42097) 2 0.614 0.906 0.732 128
(_objective pid=42097) 3 0.850 0.936 0.891 109
(_objective pid=42097) 4 0.593 0.273 0.374 187
(_objective pid=42097) 5 0.818 0.115 0.202 78
(_objective pid=42097) 6 0.584 0.653 0.617 239
(_objective pid=42097) 7 0.606 0.434 0.506 99
(_objective pid=42097) 8 0.596 0.738 0.659 408
(_objective pid=42097) 9 0.564 0.175 0.267 126
(_objective pid=42097) 10 0.731 0.831 0.778 59
(_objective pid=42097)
(_objective pid=42097) accuracy 0.644 1751
(_objective pid=42097) macro avg 0.674 0.618 0.599 1751
(_objective pid=42097) weighted avg 0.647 0.644 0.612 1751
(_objective pid=42097)
(_objective pid=42097) {'eval_loss': 1.00838041305542, 'eval_macro_f1': 0.598748987658475, 'eval_macro_precision': 0.6744155267629177, 'eval_macro_recall': 0.6175753130931809, 'eval_balanced_accuracy': 0.6175753130931809, 'eval_runtime': 6.2857, 'eval_samples_per_second': 278.568, 'eval_steps_per_second': 8.75, 'epoch': 1.0}
(pid=43454) 2023-02-16 05:38:44.739485: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
(pid=43454) 2023-02-16 05:38:44.739641: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
(pid=43454) 2023-02-16 05:38:44.739655: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
== Status ==
Current time: 2023-02-16 05:38:47 (running for 00:08:54.32)
Memory usage on this node: 13.8/83.5 GiB
PopulationBasedTraining: 0 checkpoints, 0 perturbs
Resources requested: 10.0/12 CPUs, 1.0/1 GPUs, 0.0/49.72 GiB heap, 0.0/24.86 GiB objects
Result logdir: /content/ray_results/tune_transformer_pbt
Number of trials: 20/50 (3 ERROR, 16 PENDING, 1 RUNNING)
+------------------------+----------+-------------------+-----------------+--------------------+----------------+
| Trial name | status | loc | learning_rate | num_train_epochs | weight_decay |
|------------------------+----------+-------------------+-----------------+--------------------+----------------|
| _objective_f0650_00003 | RUNNING | 172.28.0.12:43454 | 4.87964e-05 | 4 | 0.0102922 |
| _objective_f0650_00004 | PENDING | | 1.00312e-05 | 5 | 0.469276 |
| _objective_f0650_00005 | PENDING | | 2.21697e-05 | 5 | 0.0917023 |
| _objective_f0650_00006 | PENDING | | 2.16492e-05 | 6 | 0.215973 |
| _objective_f0650_00007 | PENDING | | 1.18666e-05 | 4 | 0.19993 |
| _objective_f0650_00008 | PENDING | | 2.82428e-05 | 5 | 0.183181 |
| _objective_f0650_00009 | PENDING | | 4.93292e-05 | 4 | 0.191231 |
| _objective_f0650_00010 | PENDING | | 3.43018e-05 | 2 | 0.0232252 |
| _objective_f0650_00011 | PENDING | | 1.05306e-05 | 6 | 0.22525 |
| _objective_f0650_00012 | PENDING | | 4.23359e-05 | 2 | 0.482816 |
| _objective_f0650_00013 | PENDING | | 1.92358e-05 | 2 | 0.00798313 |
| _objective_f0650_00014 | PENDING | | 1.48815e-05 | 5 | 0.220076 |
| _objective_f0650_00015 | PENDING | | 1.69346e-05 | 6 | 0.416597 |
| _objective_f0650_00016 | PENDING | | 3.65009e-05 | 2 | 0.12939 |
| _objective_f0650_00017 | PENDING | | 1.83177e-05 | 3 | 0.212578 |
| _objective_f0650_00018 | PENDING | | 4.87834e-05 | 5 | 0.0924272 |
| _objective_f0650_00019 | PENDING | | 2.5806e-05 | 3 | 0.224877 |
| _objective_f0650_00000 | ERROR | 172.28.0.12:39382 | 3.92798e-05 | 5 | 0.475357 |
| _objective_f0650_00001 | ERROR | 172.28.0.12:40740 | 2.78333e-05 | 6 | 0.298425 |
| _objective_f0650_00002 | ERROR | 172.28.0.12:42097 | 2.33483e-05 | 4 | 0.229624 |
+------------------------+----------+-------------------+-----------------+--------------------+----------------+
Number of errored trials: 3
+------------------------+--------------+---------------------------------------------------------------------------------------------------------------------+
| Trial name | # failures | error file |
|------------------------+--------------+---------------------------------------------------------------------------------------------------------------------|
| _objective_f0650_00000 | 1 | /content/ray_results/tune_transformer_pbt/_objective_f0650_00000_0_num_train_epochs=5_2023-02-16_05-29-52/error.txt |
| _objective_f0650_00001 | 1 | /content/ray_results/tune_transformer_pbt/_objective_f0650_00001_1_num_train_epochs=6_2023-02-16_05-32-48/error.txt |
| _objective_f0650_00002 | 1 | /content/ray_results/tune_transformer_pbt/_objective_f0650_00002_2_num_train_epochs=4_2023-02-16_05-35-45/error.txt |
+------------------------+--------------+---------------------------------------------------------------------------------------------------------------------+
```
## Hyperparameter tuning code
```
def compute_metrics(p):
...
return {
'macro_f1' : macro_f1,
}
#hyperparameter tuning configs
def my_objective(metrics):
return metrics["eval_f1"]
training_args = TrainingArguments(
...
metric_for_best_model="eval_f1",
)
def model_init():
return AutoModelForSequenceClassification.from_pretrained(model_checkpoint,num_labels=11,)
trainer = Trainer(
...
compute_metrics=compute_metrics,
)
tune_config = {
"per_device_train_batch_size": 32,
}
scheduler = PopulationBasedTraining(
...
metric="eval_f1",
...
#hyperparameter search
best_run = trainer.hyperparameter_search(
hp_space=lambda _: tune_config,
compute_objective = my_objective,
...
)
```
Any help would be greatly appreciated!
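For reference, a minimal sketch of the fix pointed out in the comments above: every name must agree once the Trainer's automatic `eval_` prefix is accounted for. Everything beyond the key names (output dir, mutation space, etc.) is an assumption for illustration:

```python
from ray import tune
from ray.tune.schedulers import PopulationBasedTraining
from transformers import TrainingArguments

def my_objective(metrics):
    # compute_metrics returns "macro_f1"; the Trainer logs it as "eval_macro_f1"
    return metrics["eval_macro_f1"]

training_args = TrainingArguments(
    output_dir="out",  # assumption: real args elided
    evaluation_strategy="epoch",
    metric_for_best_model="eval_macro_f1",
)

scheduler = PopulationBasedTraining(
    metric="eval_macro_f1",
    mode="max",
    hyperparam_mutations={"weight_decay": tune.uniform(0.0, 0.5)},  # assumption
)
```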
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21659/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21658
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21658/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21658/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21658/events
|
https://github.com/huggingface/transformers/pull/21658
| 1,587,044,502
|
PR_kwDOCUB6oc5KGMsj
| 21,658
|
Bump werkzeug from 2.0.3 to 2.2.3 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.0.3 to 2.2.3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pallets/werkzeug/releases">werkzeug's releases</a>.</em></p>
<blockquote>
<h2>2.2.3</h2>
<p>This is a fix release for the 2.2.x release branch.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-3">https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-3</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/26?closed=1">https://github.com/pallets/werkzeug/milestone/26?closed=1</a></li>
</ul>
<p>This release contains security fixes for:</p>
<ul>
<li><a href="https://github.com/pallets/werkzeug/security/advisories/GHSA-xg9f-g7g7-2323">https://github.com/pallets/werkzeug/security/advisories/GHSA-xg9f-g7g7-2323</a></li>
<li><a href="https://github.com/pallets/werkzeug/security/advisories/GHSA-px8h-6qxv-m22q">https://github.com/pallets/werkzeug/security/advisories/GHSA-px8h-6qxv-m22q</a></li>
</ul>
<h2>2.2.2</h2>
<p>This is a fix release for the <a href="https://github.com/pallets/werkzeug/releases/tag/2.2.0">2.2.0</a> feature release.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-2">https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-2</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/25?closed=1">https://github.com/pallets/werkzeug/milestone/25?closed=1</a></li>
</ul>
<h2>2.2.1</h2>
<p>This is a fix release for the <a href="https://github.com/pallets/werkzeug/releases/tag/2.2.0">2.2.0</a> feature release.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-1">https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-1</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/24?closed=1">https://github.com/pallets/werkzeug/milestone/24?closed=1</a></li>
</ul>
<h2>2.2.0</h2>
<p>This is a feature release, which includes new features and removes previously deprecated features. The 2.2.x branch is now the supported bugfix branch, the 2.1.x branch will become a tag marking the end of support for that branch. We encourage everyone to upgrade, and to use a tool such as <a href="https://pypi.org/project/pip-tools/">pip-tools</a> to pin all dependencies and control upgrades.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-0">https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-0</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/20?closed=1">https://github.com/pallets/werkzeug/milestone/20?closed=1</a></li>
</ul>
<h2>2.1.2</h2>
<p>This is a fix release for the <a href="https://github.com/pallets/werkzeug/releases/tag/2.1.0">2.1.0</a> feature release.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.1.x/changes/#version-2-1-2">https://werkzeug.palletsprojects.com/en/2.1.x/changes/#version-2-1-2</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/22?closed=1">https://github.com/pallets/werkzeug/milestone/22?closed=1</a></li>
</ul>
<h2>2.1.1</h2>
<p>This is a fix release for the <a href="https://github.com/pallets/werkzeug/releases/tag/2.1.0">2.1.0</a> feature release.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.1.x/changes/#version-2-1-1">https://werkzeug.palletsprojects.com/en/2.1.x/changes/#version-2-1-1</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/19?closed=1">https://github.com/pallets/werkzeug/milestone/19?closed=1</a></li>
</ul>
<h2>2.1.0</h2>
<p>This is a feature release, which includes new features and removes previously deprecated features. The 2.1.x branch is now the supported bugfix branch, the 2.0.x branch will become a tag marking the end of support for that branch. We encourage everyone to upgrade, and to use a tool such as <a href="https://pypi.org/project/pip-tools/">pip-tools</a> to pin all dependencies and control upgrades.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.1.x/changes/#version-2-1-0">https://werkzeug.palletsprojects.com/en/2.1.x/changes/#version-2-1-0</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/16?closed=1">https://github.com/pallets/werkzeug/milestone/16?closed=1</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pallets/werkzeug/blob/main/CHANGES.rst">werkzeug's changelog</a>.</em></p>
<blockquote>
<h2>Version 2.2.3</h2>
<p>Released 2023-02-14</p>
<ul>
<li>Ensure that URL rules using path converters will redirect with strict slashes when
the trailing slash is missing. :issue:<code>2533</code></li>
<li>Type signature for <code>get_json</code> specifies that return type is not optional when
<code>silent=False</code>. :issue:<code>2508</code></li>
<li><code>parse_content_range_header</code> returns <code>None</code> for a value like <code>bytes */-1</code>
where the length is invalid, instead of raising an <code>AssertionError</code>. :issue:<code>2531</code></li>
<li>Address remaining <code>ResourceWarning</code> related to the socket used by <code>run_simple</code>.
Remove <code>prepare_socket</code>, which now happens when creating the server. :issue:<code>2421</code></li>
<li>Update pre-existing headers for <code>multipart/form-data</code> requests with the test
client. :issue:<code>2549</code></li>
<li>Fix handling of header extended parameters such that they are no longer quoted.
:issue:<code>2529</code></li>
<li><code>LimitedStream.read</code> works correctly when wrapping a stream that may not return
the requested size in one <code>read</code> call. :issue:<code>2558</code></li>
<li>A cookie header that starts with <code>=</code> is treated as an empty key and discarded,
rather than stripping the leading <code>==</code>.</li>
<li>Specify a maximum number of multipart parts, default 1000, after which a
<code>RequestEntityTooLarge</code> exception is raised on parsing. This mitigates a DoS
attack where a larger number of form/file parts would result in disproportionate
resource use.</li>
</ul>
<h2>Version 2.2.2</h2>
<p>Released 2022-08-08</p>
<ul>
<li>Fix router to restore the 2.1 <code>strict_slashes == False</code> behaviour
whereby leaf-requests match branch rules and vice
versa. :pr:<code>2489</code></li>
<li>Fix router to identify invalid rules rather than hang parsing them,
and to correctly parse <code>/</code> within converter arguments. :pr:<code>2489</code></li>
<li>Update subpackage imports in :mod:<code>werkzeug.routing</code> to use the
<code>import as</code> syntax for explicitly re-exporting public attributes.
:pr:<code>2493</code></li>
<li>Parsing of some invalid header characters is more robust. :pr:<code>2494</code></li>
<li>When starting the development server, a warning not to use it in a
production deployment is always shown. :issue:<code>2480</code></li>
<li><code>LocalProxy.__wrapped__</code> is always set to the wrapped object when
the proxy is unbound, fixing an issue in doctest that would cause it
to fail. :issue:<code>2485</code></li>
<li>Address one <code>ResourceWarning</code> related to the socket used by
<code>run_simple</code>. :issue:<code>2421</code></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pallets/werkzeug/commit/22a254fca2ad0130adbbcbd11d3de51bcb04a08b"><code>22a254f</code></a> release version 2.2.3</li>
<li><a href="https://github.com/pallets/werkzeug/commit/517cac5a804e8c4dc4ed038bb20dacd038e7a9f1"><code>517cac5</code></a> Merge pull request from GHSA-xg9f-g7g7-2323</li>
<li><a href="https://github.com/pallets/werkzeug/commit/babc8d9e8c9fa995ef26050698bc9b5a92803664"><code>babc8d9</code></a> rewrite docs about request data limits</li>
<li><a href="https://github.com/pallets/werkzeug/commit/09449ee77934a0c883f5959785864ecae6aaa2c9"><code>09449ee</code></a> clean up docs</li>
<li><a href="https://github.com/pallets/werkzeug/commit/fe899d0cdf767a7289a8bf746b7f72c2907a1b4b"><code>fe899d0</code></a> limit the maximum number of multipart form parts</li>
<li><a href="https://github.com/pallets/werkzeug/commit/cf275f42acad1b5950c50ffe8ef58fe62cdce028"><code>cf275f4</code></a> Merge pull request from GHSA-px8h-6qxv-m22q</li>
<li><a href="https://github.com/pallets/werkzeug/commit/8c2b4b82d0cade0d37e6a88e2cd2413878e8ebd4"><code>8c2b4b8</code></a> don't strip leading = when parsing cookie</li>
<li><a href="https://github.com/pallets/werkzeug/commit/7c7ce5cb73f3f7d3b9c09340e4f322aeb583dbc5"><code>7c7ce5c</code></a> [pre-commit.ci] pre-commit autoupdate (<a href="https://github-redirect.dependabot.com/pallets/werkzeug/issues/2585">#2585</a>)</li>
<li><a href="https://github.com/pallets/werkzeug/commit/19ae03e6a39b3f63fd08fef4fddae4385cdddf25"><code>19ae03e</code></a> [pre-commit.ci] auto fixes from pre-commit.com hooks</li>
<li><a href="https://github.com/pallets/werkzeug/commit/a83d3b8bf070810874c8e8d03dcce270666e10fe"><code>a83d3b8</code></a> [pre-commit.ci] pre-commit autoupdate</li>
<li>Additional commits viewable in <a href="https://github.com/pallets/werkzeug/compare/2.0.3...2.2.3">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21658/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21658",
"html_url": "https://github.com/huggingface/transformers/pull/21658",
"diff_url": "https://github.com/huggingface/transformers/pull/21658.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21658.patch",
"merged_at": 1676557423000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21657
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21657/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21657/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21657/events
|
https://github.com/huggingface/transformers/pull/21657
| 1,587,029,997
|
PR_kwDOCUB6oc5KGJex
| 21,657
|
[Examples] TPU-based training of a language model using TensorFlow
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
},
{
"id": 5160774128,
"node_id": "LA_kwDOCUB6oc8AAAABM5sp8A",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TPU",
"name": "TPU",
"color": "EF97D1",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
},
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@Rocketknight1 I incorporated the `group_texts()` utility that we discussed over Slack. Let me know if the changes look good to you. Most of it is copy-pasted from [here](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). \r\n\r\n[Here's](https://colab.research.google.com/gist/sayakpaul/adfaa9b45c9b56f222487995d0971645/scratchpad.ipynb) Colab Notebook where I verified these. ",
"@Rocketknight1 I took a deeper look into the TFRecord preparation script. I don't understand why there's a discrepancy in the following. \r\n\r\nWhile serializing the TFRecords, I am making each TFRecord shard has got a specific number of samples. When there are lesser samples for a TFRecord shard than the specified amount, that's fine. \r\n\r\nBut when I load the TFRecords back and create a `tf.data.Dataset` out of them, the number of entries in the dataset (before batching) is much lesser. \r\n\r\n\r\nHere is a minimal Colab Notebook that demonstrates the issue: https://colab.research.google.com/gist/sayakpaul/b4b02f3f656c0041c93f6ba78c8e65fd/scratchpad.ipynb.\r\n\r\nWhen you get a moment, could you take a look? ",
"Thanks @Rocketknight1 for your help in debugging https://github.com/huggingface/transformers/pull/21657#issuecomment-1468086926 (discussed internally via Slack). I am currently regenerating the TFRecord shards. I will update here once that's done.",
"@Rocketknight1 corrected TFRecord shards have been pushed to `gs://tf-tpu-training-resources`.\r\n\r\nHere are the record counts per split:\r\n\r\n* Train: 300917\r\n* Validation: 626\r\n* Test: 722\r\n\r\nThe TFRecords were generated with a block size of 512. ",
"@Rocketknight1 the training code looks good to me, except for a few things:\r\n\r\n* Maybe we should scale the LR with the batch size? \r\n* Take `mlm_probability` as a CLI arg? \r\n* Modularize the dataset preparation code a bit? \r\n\r\nBut all these are non-blockers. Let's do 4 - 5 training runs varying the number of epochs and the learning rate. ",
"@sayakpaul MLM probability added as an arg and I modularized the loading!",
"@Rocketknight1 started a training run with:\r\n\r\n```bash\r\npython3 train_model.py \\\r\n --tokenizer tf-tpu/unigram-tokenizer-wikitext \\\r\n --per_replica_batch_size 64 \\\r\n --tpu_name local --tpu_zone us-central1 --gcp_project huggingface-ml --bfloat16 \\\r\n --train_dataset gs://tf-tpu-training-resources/train --eval_dataset gs://tf-tpu-training-resources/validation \\\r\n --num_epochs 100 \\\r\n --output_dir roberta-base-epochs-100 --hub_model_id tf-tpu/roberta-base-epochs-100\r\n```",
"@Rocketknight1 here's the final model trained with the command from [here](https://github.com/huggingface/transformers/pull/21657#issuecomment-1483729424):\r\n\r\nhttps://huggingface.co/tf-tpu/roberta-base-epochs-100\r\n\r\nWhen you try out examples in the widget of the model page ^, pass `[MASK]` instead of the default `<mask>`. The results are far from perfect (evident from the validation accuracy), though. ",
"@Rocketknight1 could you review [this PR](https://huggingface.co/tf-tpu/roberta-base-epochs-500-no-wd/discussions/1)? ",
"@sgugger thanks!\r\n\r\nI addressed your comments. For https://github.com/huggingface/transformers/pull/21657#discussion_r1164017322, I will defer to @Rocketknight1. ",
"Merging since the failing tests are unrelated. "
] | 1,676
| 1,681
| 1,681
|
MEMBER
| null |
This PR adds an example of performing (masked) language model training using TensorFlow and TPUs. The example is meant to act as a reference for the community on this topic. The following are the main components of the PR:
* Tokenizer training script (for completeness)
* TFRecords preparation script (recommended practice when using TPUs)
* Training script
* Evaluation / inference
The purpose of this separation (as opposed to having everything in a single script) is to allow the community to have isolated reference points for performing TPU-based training of our models, which I think is beneficial.
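Regarding the TFRecords preparation step above, here is a rough sketch of the `group_texts` chunking utility discussed in the comments, adapted from the public HF language-modeling notebook; the `block_size` value matches the one used for the shards, but the exact field handling in the final script may differ:

```python
block_size = 512

def group_texts(examples):
    # Concatenate all tokenized texts, then split them into block_size chunks,
    # dropping the small remainder at the end.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    total_length = (total_length // block_size) * block_size
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
```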
The artifacts produced during this project can be found here: https://huggingface.co/tf-tpu.
* Tokenizer (trained from scratch): https://huggingface.co/tf-tpu/unigram-tokenizer-wikitext
* Model: https://huggingface.co/tf-tpu/roberta-base-epochs-500-no-wd
Cc: @Rocketknight1 @gante @amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21657/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21657/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21657",
"html_url": "https://github.com/huggingface/transformers/pull/21657",
"diff_url": "https://github.com/huggingface/transformers/pull/21657.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21657.patch",
"merged_at": 1681449062000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21656
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21656/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21656/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21656/events
|
https://github.com/huggingface/transformers/issues/21656
| 1,586,947,030
|
I_kwDOCUB6oc5eluPW
| 21,656
|
T5 int8 inference is not compatible with nvidia/apex
|
{
"login": "lukaemon",
"id": 1643232,
"node_id": "MDQ6VXNlcjE2NDMyMzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1643232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukaemon",
"html_url": "https://github.com/lukaemon",
"followers_url": "https://api.github.com/users/lukaemon/followers",
"following_url": "https://api.github.com/users/lukaemon/following{/other_user}",
"gists_url": "https://api.github.com/users/lukaemon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukaemon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukaemon/subscriptions",
"organizations_url": "https://api.github.com/users/lukaemon/orgs",
"repos_url": "https://api.github.com/users/lukaemon/repos",
"events_url": "https://api.github.com/users/lukaemon/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukaemon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @lukaemon \r\nYes this is a known issue, currently int8 and apex are not supported together, the fix I can propose now is to disable apex by uninstalling it until we found a proper fix!\r\nThanks a lot",
"Will do. Thanks. "
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.14.0a0+44dac51 (True)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.4 (gpu)
- Jax version: 0.4.3
- JaxLib version: 0.4.3
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Copy-pasted the int8 inference code from the flan-t5-xxl model page:
```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto", load_in_8bit=True)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
Error message:
```
โ /usr/local/lib/python3.8/dist-packages/apex/normalization/fused_layer_norm.py:69 in forward โ
โ โ
โ 66 โ โ ctx.eps = eps โ
โ 67 โ โ input_ = input.contiguous() โ
โ 68 โ โ weight_ = weight.contiguous() โ
โ โฑ 69 โ โ output, invvar = fused_layer_norm_cuda.rms_forward_affine( โ
โ 70 โ โ โ input_, ctx.normalized_shape, weight_, ctx.eps) โ
โ 71 โ โ ctx.save_for_backward(input_, weight_, invvar) โ
โ 72 โ โ return output โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
RuntimeError: expected scalar type Float but found Half
```
If I understand correctly, apex is used automatically; see the T5 docs:
> If youโd like a faster training and inference performance, install [apex](https://github.com/NVIDIA/apex#quick-start) and then the model will automatically use apex.normalization.FusedRMSNorm instead of T5LayerNorm. The former uses an optimized fused kernel which is several times faster than the latter.
Is there any workaround?
Should I turn off apex during `load_in_8bit` inference so the default layer norm is used? If so, how?
### Expected behavior
I want int8 inference to work. Is this a bug, or did I miss something? Thanks.
Ideally I'd keep apex during training and turn it off during int8 inference.
Removing apex definitely works, but a quick benchmark shows about a 10% TFLOPS drop during training.
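In case it helps others, a purely hypothetical workaround sketch (not an official API) that swaps apex's `FusedRMSNorm` modules back to the pure-PyTorch `T5LayerNorm` for int8 inference, so apex can stay installed for training; the `FusedRMSNorm` attribute names are assumptions and may need adjusting:

```python
import torch
from transformers.models.t5.modeling_t5 import T5LayerNorm

def strip_apex_norms(model):
    # Collect replacements first, then apply, to avoid mutating while iterating.
    replacements = []
    for module in model.modules():
        for name, child in module.named_children():
            if child.__class__.__name__ == "FusedRMSNorm":
                replacements.append((module, name, child))
    for module, name, child in replacements:
        norm = T5LayerNorm(child.weight.numel(), eps=child.eps)  # eps attr assumed
        norm.weight = torch.nn.Parameter(child.weight.data.clone())
        setattr(module, name, norm.to(child.weight.device))
    return model
```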
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21656/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21655
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21655/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21655/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21655/events
|
https://github.com/huggingface/transformers/pull/21655
| 1,586,867,970
|
PR_kwDOCUB6oc5KFmUP
| 21,655
|
[bloom] gradient_checkpointing fix
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
The args signature of `BloomBlock.forward` is:
https://github.com/huggingface/transformers/blob/9d1116e9951686f937d17697820117636bfc05a5/src/transformers/models/bloom/modeling_bloom.py#L417-L425
but when it's called in the `gradient_checkpointing` code path, this is used:
https://github.com/huggingface/transformers/blob/9d1116e9951686f937d17697820117636bfc05a5/src/transformers/models/bloom/modeling_bloom.py#L772-L778
so unless I'm mistaken, `head_mask` is passed as `layer_past`.
This PR re-injects the missing `layer_past` arg.
I see that there are tests covering the overall feature, but I haven't looked closely at what they exercise. Perhaps `head_mask` simply isn't being used there, so things happen to work anyway.
@younesbelkada, could you please check whether this is an omission or whether `layer_past` was left out for a specific reason?
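For illustration, a sketch of the corrected call; variable names follow `modeling_bloom.py`, but treat the exact call as a sketch rather than the merged diff:

```python
# Positional args now line up with
# BloomBlock.forward(hidden_states, alibi, attention_mask, layer_past, head_mask, ...)
outputs = torch.utils.checkpoint.checkpoint(
    create_custom_forward(block),
    hidden_states,
    alibi,
    causal_mask,
    layer_past,   # previously omitted, so head_mask[i] landed in this slot
    head_mask[i],
)
```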
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21655/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21655",
"html_url": "https://github.com/huggingface/transformers/pull/21655",
"diff_url": "https://github.com/huggingface/transformers/pull/21655.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21655.patch",
"merged_at": 1676566639000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21653
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21653/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21653/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21653/events
|
https://github.com/huggingface/transformers/pull/21653
| 1,586,759,543
|
PR_kwDOCUB6oc5KFPEO
| 21,653
|
[WhisperModel] fix bug in reshaping labels
|
{
"login": "jonatasgrosman",
"id": 5097052,
"node_id": "MDQ6VXNlcjUwOTcwNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonatasgrosman",
"html_url": "https://github.com/jonatasgrosman",
"followers_url": "https://api.github.com/users/jonatasgrosman/followers",
"following_url": "https://api.github.com/users/jonatasgrosman/following{/other_user}",
"gists_url": "https://api.github.com/users/jonatasgrosman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonatasgrosman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonatasgrosman/subscriptions",
"organizations_url": "https://api.github.com/users/jonatasgrosman/orgs",
"repos_url": "https://api.github.com/users/jonatasgrosman/repos",
"events_url": "https://api.github.com/users/jonatasgrosman/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonatasgrosman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Also cc @ArthurZucker "
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
Currently, in the Whisper model's forward pass, the target `labels` are reshaped using the `view` method before being passed into the loss function:
https://github.com/huggingface/transformers/blob/1567bef3b35c51b7a3cc6b4edf243b208279155d/src/transformers/models/whisper/modeling_whisper.py#L1214
The `view` method requires the torch tensor to be contiguous, and some operations commonly performed on the labels can leave them non-contiguous.
So using `view` can cause problems during model training. This issue was already fixed for another model by @sanchit-gandhi in a [PR](https://github.com/huggingface/transformers/pull/16748), and I'm just replicating the same solution (using `reshape()` instead of `view()`) here for the Whisper model.
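For illustration only (not from the PR), a minimal example of why `view` fails on non-contiguous tensors while `reshape` does not:

```python
import torch

labels = torch.arange(6).reshape(2, 3).t()  # transposing makes it non-contiguous
print(labels.is_contiguous())  # False
print(labels.reshape(-1))      # works: reshape copies when it has to
labels.view(-1)                # RuntimeError: view size is not compatible with input tensor's size and stride
```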
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21653/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21653/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21653",
"html_url": "https://github.com/huggingface/transformers/pull/21653",
"diff_url": "https://github.com/huggingface/transformers/pull/21653.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21653.patch",
"merged_at": 1676559646000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21652
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21652/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21652/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21652/events
|
https://github.com/huggingface/transformers/pull/21652
| 1,586,660,656
|
PR_kwDOCUB6oc5KE54K
| 21,652
|
refactor: Make direct_transformers_import util
|
{
"login": "connor-henderson",
"id": 78612354,
"node_id": "MDQ6VXNlcjc4NjEyMzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/connor-henderson",
"html_url": "https://github.com/connor-henderson",
"followers_url": "https://api.github.com/users/connor-henderson/followers",
"following_url": "https://api.github.com/users/connor-henderson/following{/other_user}",
"gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions",
"organizations_url": "https://api.github.com/users/connor-henderson/orgs",
"repos_url": "https://api.github.com/users/connor-henderson/repos",
"events_url": "https://api.github.com/users/connor-henderson/events{/privacy}",
"received_events_url": "https://api.github.com/users/connor-henderson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,679
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Is this wanted? I'll just close it out if not.
Moves the common pattern of directly importing `transformers` from source into a utility function.
Related [PR](https://github.com/huggingface/transformers/pull/21651)
Related to this issue: #21645
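For context, a hedged sketch of what such a utility could look like; the helper actually added by the PR may differ in name and details. It imports `transformers` straight from a source checkout instead of whatever is installed in site-packages:

```python
import importlib.util
import os

def direct_transformers_import(path: str, file: str = "__init__.py"):
    """Import transformers directly from `path`, bypassing site-packages."""
    spec = importlib.util.spec_from_file_location(
        "transformers", os.path.join(path, file), submodule_search_locations=[path]
    )
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```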
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? Happy to write if wanted
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21652/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21652",
"html_url": "https://github.com/huggingface/transformers/pull/21652",
"diff_url": "https://github.com/huggingface/transformers/pull/21652.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21652.patch",
"merged_at": 1676565153000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21651
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21651/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21651/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21651/events
|
https://github.com/huggingface/transformers/pull/21651
| 1,586,456,872
|
PR_kwDOCUB6oc5KEOBz
| 21,651
|
Update deprecated load_module
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
COLLABORATOR
| null |
# What does this PR do?
This PR updates the uses of `load_module` (which is going to be dropped in Python 3.12) to a non-deprecated API (the replacement works starting from Python 3.5, so all good for Transformers).
Fixes #21645
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21651/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21651",
"html_url": "https://github.com/huggingface/transformers/pull/21651",
"diff_url": "https://github.com/huggingface/transformers/pull/21651.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21651.patch",
"merged_at": 1676494645000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21650
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21650/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21650/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21650/events
|
https://github.com/huggingface/transformers/issues/21650
| 1,586,454,864
|
I_kwDOCUB6oc5ej2FQ
| 21,650
|
Converting Megatron_T5 to HF_T5
|
{
"login": "WenzhengZhang",
"id": 45067787,
"node_id": "MDQ6VXNlcjQ1MDY3Nzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/45067787?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WenzhengZhang",
"html_url": "https://github.com/WenzhengZhang",
"followers_url": "https://api.github.com/users/WenzhengZhang/followers",
"following_url": "https://api.github.com/users/WenzhengZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/WenzhengZhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WenzhengZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WenzhengZhang/subscriptions",
"organizations_url": "https://api.github.com/users/WenzhengZhang/orgs",
"repos_url": "https://api.github.com/users/WenzhengZhang/repos",
"events_url": "https://api.github.com/users/WenzhengZhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/WenzhengZhang/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] |
closed
| false
| null |
[] |
[
"Hello guys,\r\nI saw you closed this issue.\r\nDid you find any way to convert megatron t5 <-> hf T5?",
"I would be interested too.\r\nDid you find any solution?",
"any potential solutions?",
"Don't think this is supported yet, but feel free to open a pr if you want! ๐ค "
] | 1,676
| 1,701
| 1,677
|
NONE
| null |
Currently both [Megatron_BERT conversion](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py) and [Megatron_GPT conversion](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py) are supported. Do you have any plans to support T5 model conversion?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21650/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21649
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21649/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21649/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21649/events
|
https://github.com/huggingface/transformers/pull/21649
| 1,586,446,916
|
PR_kwDOCUB6oc5KELxk
| 21,649
|
Fix axial positional encoding calculations for reformer.mdx
|
{
"login": "ijindal",
"id": 19698647,
"node_id": "MDQ6VXNlcjE5Njk4NjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/19698647?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ijindal",
"html_url": "https://github.com/ijindal",
"followers_url": "https://api.github.com/users/ijindal/followers",
"following_url": "https://api.github.com/users/ijindal/following{/other_user}",
"gists_url": "https://api.github.com/users/ijindal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ijindal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ijindal/subscriptions",
"organizations_url": "https://api.github.com/users/ijindal/orgs",
"repos_url": "https://api.github.com/users/ijindal/repos",
"events_url": "https://api.github.com/users/ijindal/events{/privacy}",
"received_events_url": "https://api.github.com/users/ijindal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
Fix axial positional encoding calculations
# What does this PR do?
This PR corrects the calculations for Axial Positional Encoding in the Reformer model documentation.
- Since d = d_1 + d_2,
- if d = 2^10 = 1024,
- then d_1 and d_2 cannot both equal 2^5, as 2^5 + 2^5 = 32 + 32 = 64, which is not equal to 1024.
- Instead, d_1 and d_2 should sum to 2^10, so I set both dimensions to 2^9,
- that is, 2^9 + 2^9 = 512 + 512 = 1024,
and fixed the subsequent calculations.
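Written out in one line (just restating the corrected arithmetic above):
```latex
d = d_1 + d_2 = 2^9 + 2^9 = 512 + 512 = 2^{10} = 1024
```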
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21649/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21649",
"html_url": "https://github.com/huggingface/transformers/pull/21649",
"diff_url": "https://github.com/huggingface/transformers/pull/21649.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21649.patch",
"merged_at": 1676959192000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21648
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21648/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21648/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21648/events
|
https://github.com/huggingface/transformers/pull/21648
| 1,586,284,708
|
PR_kwDOCUB6oc5KDohi
| 21,648
|
Generate: PT Dynamo without graph breaks in the main greedy/sample loop
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,680
| 1,676
|
MEMBER
| null |
# What does this PR do?
This PR is part of our PT Dynamo + `.generate()` readiness.
Let's start with the basics:
1 - Calling generate after `torch.compile()` doesn't explode -- this PR fixes the error seen in https://github.com/pytorch/pytorch/issues/93042
2 - There are no graph breaks in the main generation loop, for greedy search and sample. [Graph breaks are a major source of slowdowns](https://pytorch.org/docs/master/dynamo/faq.html#why-am-i-not-seeing-speedups).
A quick run on GPT2 shows that we gain a ~1.5x speedup with `torch.compile()` on `.generate()` after these changes (~4 mins of compilation time). Please note that this is a quick check, and not a proper benchmark ;)
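For reference, a minimal sketch of this kind of check (the GPT2 checkpoint and generation settings are illustrative, not the exact benchmark setup):
```python
# Compile the forward pass, then generate as usual; the first call pays the compilation cost.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.forward = torch.compile(model.forward)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```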
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21648/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21648",
"html_url": "https://github.com/huggingface/transformers/pull/21648",
"diff_url": "https://github.com/huggingface/transformers/pull/21648.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21648.patch",
"merged_at": 1676492207000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21647
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21647/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21647/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21647/events
|
https://github.com/huggingface/transformers/pull/21647
| 1,586,074,330
|
PR_kwDOCUB6oc5KC6if
| 21,647
|
Skipping more high mem tests - Wav2Vec2 Hubert
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ydshieh ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
COLLABORATOR
| null |
# What does this PR do?
In #21643 there are some tests which I forgot to skip, e.g. for Wav2Vec2 I skipped the high memory tests for `TFWav2Vec2ModelTest` but didn't add them to the other class, `TFWav2Vec2RobustModelTest`. This means some tests on CircleCI still fail with crashing processes - cf [this run](https://app.circleci.com/pipelines/github/huggingface/transformers/57822/workflows/479318b1-d4b2-4f72-8d98-7b23dde142e8/jobs/702252). This PR adds a `unittest.skip` decorator to the missed tests.
On this run, `tests/models/wav2vec2_phoneme/test_tokenization_wav2vec2_phoneme.py::Wav2Vec2PhonemeCTCTokenizerTest::test_number_of_added_tokens` also failed with a crashing process. Upon inspection, the following tests were also run on the same process (`gw2`):
```
tests/models/wav2vec2_phoneme/test_tokenization_wav2vec2_phoneme.py
TFBartModelTest.test_keras_fit
TFBartModelTest.test_keras_save_load
TFGPTJModelTest.test_keras_save_load
TFMBartModelTest.test_keras_save_load
TFOpenAIGPTModelTest.test_keras_save_load
TFRemBertModelTest.test_keras_save_load
TFSwinModelTest.test_keras_save_load
Wav2Vec2PhonemeCTCTokenizerTest.test_add_tokens_tokenizer
Wav2Vec2PhonemeCTCTokenizerTest.test_added_token_are_matched_longest_first
Wav2Vec2PhonemeCTCTokenizerTest.test_batch_encode_plus_batch_sequence_length
Wav2Vec2PhonemeCTCTokenizerTest.test_batch_encode_plus_overflowing_tokens
Wav2Vec2PhonemeCTCTokenizerTest.test_batch_encode_plus_padding
Wav2Vec2PhonemeCTCTokenizerTest.test_call
Wav2Vec2PhonemeCTCTokenizerTest.test_case_insensitive
Wav2Vec2PhonemeCTCTokenizerTest.test_change_phonemizer_lang
Wav2Vec2PhonemeCTCTokenizerTest.test_encode
Wav2Vec2PhonemeCTCTokenizerTest.test_encode_decode
Wav2Vec2PhonemeCTCTokenizerTest.test_encode_decode_with_del
Wav2Vec2PhonemeCTCTokenizerTest.test_encode_decode_with_del_filter
Wav2Vec2PhonemeCTCTokenizerTest.test_encode_plus_with_padding
Wav2Vec2PhonemeCTCTokenizerTest.test_encode_with_del
```
None of these models (PyTorch Wav2Vec2 Phoneme, TFOpenAIGPT, TFMBart, TFGPTJ, TFRemBert, TFSwin) should have been affected by PR #21502, which caused the recent memory issues with Hubert and Wav2Vec2. I'm therefore unsure whether resolving the upstream issues with Wav2Vec2 and Hubert will resolve this, unfortunately.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21647/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21647",
"html_url": "https://github.com/huggingface/transformers/pull/21647",
"diff_url": "https://github.com/huggingface/transformers/pull/21647.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21647.patch",
"merged_at": 1676476851000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21646
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21646/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21646/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21646/events
|
https://github.com/huggingface/transformers/pull/21646
| 1,586,024,859
|
PR_kwDOCUB6oc5KCv3N
| 21,646
|
Fix dynamic module import error
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Run the following commpand\r\n```python\r\npython run_debug.py\r\n```\r\nwith the 2 files\r\n\r\n### run_debug.py\r\n```python\r\nimport os\r\n\r\nfor i in range(300):\r\n print(i)\r\n with open(\"output.txt\", \"a+\") as fp:\r\n fp.write(str(i) + \"\\n\")\r\n os.system(\"python3 debug.py\")\r\n```\r\n(we need to run the debugging code `foo` (contained in file `debug.py`) in difference processes each time, instead of running the script `debug.py` with a for loop defined inside it - as this will be always in the same process)\r\n\r\n### debug.py\r\n```python\r\nimport time, traceback, tempfile, os\r\nfrom transformers.utils import HF_MODULES_CACHE\r\n\r\n\r\ndef foo():\r\n from transformers import AutoModel\r\n\r\n model = AutoModel.from_pretrained(\"hf-internal-testing/test_dynamic_model\", trust_remote_code=True)\r\n # Test model can be reloaded.\r\n with tempfile.TemporaryDirectory() as tmp_dir:\r\n model.save_pretrained(tmp_dir)\r\n try:\r\n reloaded_model = AutoModel.from_pretrained(tmp_dir, trust_remote_code=True)\r\n except Exception as e:\r\n print(e)\r\n with open(\"output.txt\", \"a+\") as fp:\r\n fp.write(f\"{traceback.format_exc()}\" + \"\\n\")\r\n\r\n\r\nif __name__ == \"__main__\":\r\n timeout = os.environ.get(\"PYTEST_TIMEOUT\", 10)\r\n timeout = int(timeout)\r\n for i in range(1):\r\n time.sleep(1)\r\n print(i)\r\n with open(\"output.txt\", \"a+\") as fp:\r\n fp.write(str(i) + \"\\n\")\r\n try:\r\n os.system(f'rm -rf \"{HF_MODULES_CACHE}\"')\r\n except:\r\n pass\r\n foo()\r\n print(\"=\" * 80)\r\n with open(\"output.txt\", \"a+\") as fp:\r\n fp.write(\"=\" * 80 + \"\\n\")\r\n```",
"Thanks for working on this! I was going to have a look at it when back from vacation but if you beat me to it ;-)\r\n\r\nMy solution would have been to change the way the local module works: for now I dumb every file there without structure, I wanted to add a folder per model (so given by `pretrained_model_name_or_path`) which would also fix this issue I believe.",
"@sgugger I am open to explore further, but I have a bit doubt regarding\r\n\r\n> I wanted to add a folder per model (so given by `pretrained_model_name_or_path`) which would also fix this issue I believe.\r\n\r\nWhile I am debugging (this single test), the only model appears\r\n\r\n```\r\ntransformers_modules/hf-internal-testing/test_dynamic_model/12345678901234567890.../\r\ntransformers_modules/local/\r\n```\r\nso I don't see multiple models sharing the same folder, but the issue still occurs. So, I am not sure how to proceed with the solution you mentioned above.",
"Hmm, there seems to affect other related tests. I will have to take a look ๐ญ ",
"I believe the conflict is between two files in local being written/deleted concurrently (but I might be wrong) hence making sure we things like\r\n```\r\ntransformers_modules/local/123456...\r\ntransformers_modules/local/777888...\r\n```\r\nmight fix the issue.",
"> I believe the conflict is between two files in local being written/deleted concurrently\r\n\r\nOn (circleci) CI, we have `pytest -n 8`, which might cause the situation you mentioned. But I am debugging by running the following function in a loop (and the issue still appears), so I kinda feel the issue is not from the concurrently read/write/delete operations\r\n\r\n```python\r\ndef foo():\r\n from transformers import AutoModel\r\n\r\n model = AutoModel.from_pretrained(\"hf-internal-testing/test_dynamic_model\", trust_remote_code=True)\r\n # Test model can be reloaded.\r\n with tempfile.TemporaryDirectory() as tmp_dir:\r\n model.save_pretrained(tmp_dir)\r\n reloaded_model = AutoModel.from_pretrained(tmp_dir, trust_remote_code=True)\r\n```\r\n\r\nI could explore anyway - but maybe let me finalize the current PR (make CI green) first ",
"Finally get it:\r\n\r\n- we don't need to remove other files (config, `__init__.py`) or `__pycache__` folder\r\n- the point is: we need to remove the `module_file_name` in a subprocess, then copy it back\r\n - os.system(\"rm -rf ...\") works: as it is in another process\r\n - os.system(f\"python3 -c '{cmd}'\"): same, but we don't use Linux specific command --> way to go\r\n - os.remove(...): not working! I could not explain (as I don't know the reason behind) ๐ข \r\n",
"Don't know why we get an error where a module is not a python file, but a package. See below.\r\nCan't reproduce so far, but the fix works for the auto model dynamic loading test.\r\n\r\n```bash\r\nFAILED tests/models/auto/test_image_processing_auto.py::AutoImageProcessorTest::test_from_pretrained_dynamic_image_processor\r\n\r\n - ModuleNotFoundError: No module named 'transformers_modules.local__tmp_tmpkcj_lb5j'\r\n```",
"This PR is ready for review.\r\n\r\nThere is one failure thtat I can't reproduce with the same code snippet. See [this comment](https://github.com/huggingface/transformers/pull/21646#issuecomment-1433336303). It seems this happens much rarely. And probably we can investigate it if it happens again. \r\n\r\n",
"Thanks for investigating so deeply this issue!"
] | 1,676
| 1,676
| 1,676
|
COLLABORATOR
| null |
# What does this PR do?
### Issue
We have failing test
```bash
FAILED tests/models/auto/test_modeling_auto.py::AutoModelTest::test_from_pretrained_dynamic_model_distant
ModuleNotFoundError: No module named 'transformers_modules.local.modeling'
```
The full trace is given at the end.
After a long debug process, it turns out that, when reloading from the saved model
```python
model = AutoModel.from_pretrained("hf-internal-testing/test_dynamic_model", trust_remote_code=True)
with tempfile.TemporaryDirectory() as tmp_dir:
model.save_pretrained(tmp_dir)
reloaded_model = AutoModel.from_pretrained(tmp_dir, trust_remote_code=True)
```
if `configuration.py` appears in the dynamic module directory (here `transformers_modules/local`), it sometimes interferes with the import of `transformers_modules.local.modeling`. However, I have no clear explanation for why this happens.
### What this PR fixes
This PR therefore tries to keep other module files out of the way while the code imports a specific module file, around this line:
```
def get_class_in_module(class_name, module_path):
    ...
    module = importlib.import_module(module_path)
    ...
```
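A rough sketch of the idea (illustrative only — per the comments below, the actual fix removes the sibling file in a subprocess and copies it back; the helper here is a hypothetical simplification):
```python
import importlib
import os
import shutil


def import_module_alone(module_dir, module_file, module_path):
    # Temporarily move sibling .py files aside so only `module_file` is
    # visible during the import, then restore them afterwards.
    siblings = [f for f in os.listdir(module_dir) if f.endswith(".py") and f != module_file]
    for f in siblings:
        shutil.move(os.path.join(module_dir, f), os.path.join(module_dir, f) + ".bak")
    try:
        return importlib.import_module(module_path)
    finally:
        for f in siblings:
            shutil.move(os.path.join(module_dir, f) + ".bak", os.path.join(module_dir, f))
```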
### Result
Running the reproduction code snippet (provided in the comment below) in a loop 300 times:
- with this PR: this issue doesn't appear, [job run](https://app.circleci.com/pipelines/github/huggingface/transformers/57824/workflows/fb3b74ed-9231-41f8-80c9-7d43fb871a35/jobs/702257/steps)
- without the fix: this issue appears with 50% probability [job run](https://app.circleci.com/pipelines/github/huggingface/transformers/57826/workflows/1dee1636-9aa3-433c-a044-0c4c4c9dcbff/jobs/702277/steps)
#### Full traceback
```bash
Traceback (most recent call last):
...
reloaded_model = AutoModel.from_pretrained(tmp_dir, trust_remote_code=True)
File "/home/circleci/.pyenv/versions/3.7.12/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 463, in from_pretrained
pretrained_model_name_or_path, module_file + ".py", class_name, **hub_kwargs, **kwargs
File "/home/circleci/.pyenv/versions/3.7.12/lib/python3.7/site-packages/transformers/dynamic_module_utils.py", line 367, in get_class_from_dynamic_module
return get_class_in_module(class_name, final_module.replace(".py", ""))
File "/home/circleci/.pyenv/versions/3.7.12/lib/python3.7/site-packages/transformers/dynamic_module_utils.py", line 147, in get_class_in_module
module = importlib.import_module(module_path)
File "/home/circleci/.pyenv/versions/3.7.12/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'transformers_modules.local.modeling'
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21646/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21646",
"html_url": "https://github.com/huggingface/transformers/pull/21646",
"diff_url": "https://github.com/huggingface/transformers/pull/21646.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21646.patch",
"merged_at": 1676665360000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21645
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21645/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21645/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21645/events
|
https://github.com/huggingface/transformers/issues/21645
| 1,586,008,723
|
I_kwDOCUB6oc5eiJKT
| 21,645
|
load_module will be removed in Python 3.12
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for pointing this out! Will have a look."
] | 1,676
| 1,676
| 1,676
|
MEMBER
| null |
Several of our scripts call `spec.loader.load_module()`. Although this mostly affects standalone utils scripts, it's also called at the top level of `processing_utils.py`, which will be executed by all `Processor` classes.
`load_module()` has been deprecated for a while and will be fully deleted in Python 3.12, which is entering beta soon. We need to replace this code or the library will not be usable in Py3.12.
I can investigate and try to find a suitable replacement when I have time, but if anyone is more familiar with that code and can think of an obvious replacement that achieves the same goal, let me know!
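For reference, the standard non-deprecated pattern from the `importlib` docs looks like this (module name and path are illustrative):
```python
import importlib.util

spec = importlib.util.spec_from_file_location("my_module", "/path/to/my_module.py")
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)  # replaces the deprecated spec.loader.load_module()
```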
cc @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21645/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21644
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21644/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21644/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21644/events
|
https://github.com/huggingface/transformers/issues/21644
| 1,585,928,895
|
I_kwDOCUB6oc5eh1q_
| 21,644
|
Mask2Former - ValueError: cost matrix is infeasible
|
{
"login": "asgerius",
"id": 44878204,
"node_id": "MDQ6VXNlcjQ0ODc4MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/44878204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asgerius",
"html_url": "https://github.com/asgerius",
"followers_url": "https://api.github.com/users/asgerius/followers",
"following_url": "https://api.github.com/users/asgerius/following{/other_user}",
"gists_url": "https://api.github.com/users/asgerius/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asgerius/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asgerius/subscriptions",
"organizations_url": "https://api.github.com/users/asgerius/orgs",
"repos_url": "https://api.github.com/users/asgerius/repos",
"events_url": "https://api.github.com/users/asgerius/events{/privacy}",
"received_events_url": "https://api.github.com/users/asgerius/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Cc @alaradirik ",
"Hi @asgerius, thanks for opening the issue, I'm looking into this and will try to replicate the error first",
"Friendly ping @alaradirik :) ",
"Hi @asgerius, I tried fine-tuning Mask2Former on the semantic segmentation subset of the Scene Parsing dataset and couldn't replicate the issue. \r\n\r\nIs it possible that you are using a buggy version of scipy (the bug in scipy.optimize.linear_sum_assignment is fixed in this [PR](https://github.com/scipy/scipy/pull/7031)) or there are issues with the data preprocessing? \r\n\r\nYou can refer to [Fine-tuning MaskFormer](https://pyimagesearch.com/2023/03/13/train-a-maskformer-segmentation-model-with-hugging-face-transformers/) blog post (exactly the same steps for Mask2Former) on PyImageSearch. I can take another look if the issue still persists but I'd need a minimal reproducible example to pinpoint the exact issue.",
"Hi\r\nI put together a small example (see attached), and the results are somewhat contradictory to what I wrote in the original post. The error does indeed seem to be caused by diverging loss, which often recovers after a few batches. If I implement the described fix, the code does not crash, but instead just produces a high loss. However, I have never seen this behavior in my actual project, where the loss remains well-behaved when the fix is implemented. The only major difference is the data source, as the data in this example is simply random noise. I should also mention that my use case is two-class semantic segmentation.\r\n\r\nFurther, amp seems to be another important factor in addition to the learning rate. All my trainings have been run with it enabled. If disable in the example (controlled by the `use_amp` variable), the error becomes significantly harder to reproduce, indicating to me that the error is caused by overflowing floats.\r\n\r\nTo run the code, just put the files in the same directory and run `python example.py`. My versions of the dependencies are `torch==1.13.1 numpy==1.24.2 scipy==1.10.0 transformers==4.26.0`. With `use_amp = True` and `lr = 1e-4`, I usually get the error within the first 10-20 batches.\r\n\r\nI changed the file types to .txt, as github does not allow .py and .json as attachments, so you'll have to change them back.\r\n[example.txt](https://github.com/huggingface/transformers/files/11040427/example.txt)\r\n[facebook.mask2former-swin-small-ade-semantic.config.txt](https://github.com/huggingface/transformers/files/11040428/facebook.mask2former-swin-small-ade-semantic.config.txt)\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@alaradirik I am facing the same issue, where I am using MaskFormer \r\n```\r\n\r\n 914 cost_matrix = self.cost_mask * cost_mask + self.cost_class * cost_class + self.cost_dice * cost_dice\r\n 915 # do the assigmented using the hungarian algorithm in scipy\r\n--> 916 assigned_indices: Tuple[np.array] = linear_sum_assignment(cost_matrix.cpu())\r\n 917 indices.append(assigned_indices)\r\n 919 # It could be stacked in one tensor\r\n\r\nValueError: cost matrix is infeasible\r\n```\r\nI was following @NielsRogge tutorial for fine tuning on semantic masks where I only changed training to :\r\n\r\n```\r\n with torch.cuda.amp.autocast():\r\n outputs = model(\r\n pixel_values=data[\"pixel_values\"].to(device),\r\n mask_labels=[labels.to(device) for labels in data[\"mask_labels\"]],\r\n class_labels=[labels.to(device) for labels in data[\"class_labels\"]],\r\n )\r\n loss = outputs.loss\r\n\r\noptimizer.zero_grad()\r\nscaler.scale(loss).backward()\r\nscaler.step(optimizer)\r\nscaler.update()\r\n```\r\nand my ground truths are only 0s and 1s i.e. binary masks\r\n@alaradirik I don't know how to replicate the issue as it don't know the real cause just occurs sometimes.",
"Small update: I have also seen the error during inference (amp enabled) when running on my trained model. However, this seems to be incredibly rare, as I have only ever experienced it once. I have not seen it after I implemented the fix described above in the inference code."
] | 1,676
| 1,687
| 1,684
|
NONE
| null |
### System Info
```
- `transformers` version: 4.26.0
- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.27
- Python version: 3.10.9
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: 4 x RTX 2080 Ti
- Using distributed or parallel set-up in script?: Single node DistributedDataParallel setup
```
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am fine-tuning Mask2Former for a semantic segmentation task. I sometimes get the error `ValueError: cost matrix is infeasible`. With the learning rate of `1e-4` that I use, it usually takes many thousands of batches, with the loss happily dropping, before this error appears. In my experience, the higher the learning rate, the more often it happens. The error happens during the forward pass, which roughly looks like this:
```py
image_processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-ade-semantic")
inputs = image_processor.preprocess(batch, mask_labels, return_tensors="pt")
batch, mask, class_labels = inputs["pixel_values"], inputs["mask_labels"], inputs["class_labels"]
batch = batch.to(device)
mask = [x.to(device) for x in mask]
class_labels = [x.to(device) for x in class_labels]
with torch.cuda.amp.autocast():
out = model(
pixel_values = batch,
mask_labels = mask,
class_labels = class_labels,
)
```
Unfortunately, I don't have the time to set up a full, minimally reproducible example, but I have tracked the error down to `cost_matrix` containing `torch.inf` [here](https://github.com/huggingface/transformers/blob/48327c57182fdade7f7797d1eaad2d166de5c55b/src/transformers/models/mask2former/modeling_mask2former.py#L491) (or see the stack trace below).
Full stack trace
```
Traceback (most recent call last):
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/path/to/my/code/train/run.py", line 36, in _train_wrapper
train(rank, world_size, job, dgpm)
File "/path/to/my/code/train/train.py", line 219, in train
out = model(
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1040, in forward
output = self._run_ddp_forward(*inputs, **kwargs)
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1000, in _run_ddp_forward
return module_to_run(*inputs[0], **kwargs[0])
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/path/to/my/code/model/mask2former.py", line 35, in forward
return self.model(pixel_values=pixel_values, mask_labels=mask_labels, class_labels=class_labels)
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/transformers/models/mask2former/modeling_mask2former.py", line 2464, in forward
loss_dict = self.get_loss_dict(
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/transformers/models/mask2former/modeling_mask2former.py", line 2351, in get_loss_dict
loss_dict: Dict[str, Tensor] = self.criterion(
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/transformers/models/mask2former/modeling_mask2former.py", line 792, in forward
indices = self.matcher(masks_queries_logits, class_queries_logits, mask_labels, class_labels)
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/path/to/my/amazing/virtual/environment/lib/python3.10/site-packages/transformers/models/mask2former/modeling_mask2former.py", line 496, in forward
assigned_indices: Tuple[np.array] = linear_sum_assignment(cost_matrix.cpu())
ValueError: cost matrix is infeasible
```
### Expected behavior
From what I can tell, this is not expected behavior and is caused by how `scipy.optimize.linear_sum_assignment` handles infinite values. Replacing these with very large numbers seems to fix the issue, as proposed [here](https://stackoverflow.com/questions/42035999/why-does-linear-sum-assignment-in-scipy-optimize-never-return-if-one-of-the-assi) (though for a slightly different issue). This is achieved by adding the following two lines above the call to `linear_sum_assignment` in the line linked earlier.
```py
# Clamp ±inf entries so scipy's linear_sum_assignment always sees a feasible matrix
cost_matrix = torch.minimum(cost_matrix, torch.tensor(1e10))
cost_matrix = torch.maximum(cost_matrix, torch.tensor(-1e10))
```
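For context, a tiny standalone illustration of when `linear_sum_assignment` reports infeasibility (my understanding of scipy's behavior; the values are made up):
```py
import numpy as np
from scipy.optimize import linear_sum_assignment

# Feasible: a complete assignment that avoids the inf entries exists.
linear_sum_assignment(np.array([[1.0, np.inf], [np.inf, 1.0]]))

# Infeasible: every possible assignment must use an inf entry,
# so this raises ValueError: cost matrix is infeasible.
linear_sum_assignment(np.array([[np.inf, np.inf], [1.0, 1.0]]))
```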
I have found that the error becomes more common as the learning rate is increased, so it could be related to diverging loss. However, I first discovered the error using the same learning rate as in the Mask2Former paper, `1e-4`, so I would not expect this to be too high, especially since it had happily chugged along for over 11k batches with dropping or steady loss before throwing the error.
If this is expected behavior, I at least think the error message should be improved.
Edit: I have fixed this locally by overwriting the `Mask2FormerHungarianMatcher` class with the aforementioned fix. I have not seen any diverging loss since then over many runs of thousands of epochs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21644/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21644/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21643
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21643/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21643/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21643/events
|
https://github.com/huggingface/transformers/pull/21643
| 1,585,816,422
|
PR_kwDOCUB6oc5KCCz9
| 21,643
|
Skip wav2vec2 hubert high mem tests
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
COLLABORATOR
| null |
# What does this PR do?
Skips some more troublesome Hubert and Wav2Vec2 tests. `dataset_conversion` for Hubert is now occasionally causing errors - see [this comment](https://github.com/huggingface/transformers/pull/20725#issuecomment-1430442813) - so it is skipped until this is resolved. Also skipping all tests that I've seen hit high memory and that have changed because of #21502.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21643/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21643",
"html_url": "https://github.com/huggingface/transformers/pull/21643",
"diff_url": "https://github.com/huggingface/transformers/pull/21643.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21643.patch",
"merged_at": 1676470647000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21642
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21642/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21642/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21642/events
|
https://github.com/huggingface/transformers/issues/21642
| 1,585,800,137
|
I_kwDOCUB6oc5ehWPJ
| 21,642
|
ONNX export fails for TFSegformerForSemanticSegmentation
|
{
"login": "OutSorcerer",
"id": 5833256,
"node_id": "MDQ6VXNlcjU4MzMyNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5833256?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OutSorcerer",
"html_url": "https://github.com/OutSorcerer",
"followers_url": "https://api.github.com/users/OutSorcerer/followers",
"following_url": "https://api.github.com/users/OutSorcerer/following{/other_user}",
"gists_url": "https://api.github.com/users/OutSorcerer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OutSorcerer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OutSorcerer/subscriptions",
"organizations_url": "https://api.github.com/users/OutSorcerer/orgs",
"repos_url": "https://api.github.com/users/OutSorcerer/repos",
"events_url": "https://api.github.com/users/OutSorcerer/events{/privacy}",
"received_events_url": "https://api.github.com/users/OutSorcerer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sayakpaul, any idea what the cause might be?",
"Maybe this issue is better suited for https://github.com/onnx/tensorflow-onnx\r\n\r\nCould you try downgrading the `tf2onnx` version to 1.11.1?",
"@sayakpaul \r\n\r\n>Could you try downgrading the tf2onnx version to 1.11.1?\r\n\r\nI tried this, unfortunately there is still an error that `PartitionedCall` is not supported.\r\n\r\nI also tried to downgrade tensorflow and ONNX export worked with `tensorflow==2.8.4`, here is an example: https://colab.research.google.com/gist/OutSorcerer/c8cd27a455091b57d9ea90ab3450035e/tfsegformer_onnx.ipynb\r\n\r\n>Maybe this issue is better suited for https://github.com/onnx/tensorflow-onnx\r\n\r\nThere are already issues there about `PartitionedCall` support e.g. https://github.com/onnx/tensorflow-onnx/issues/1864. \r\n\r\nHowever, since export works with a previous version of TensorFlow, it seems that `PartitionedCall` operation is not essential for a model to work. This is a low-level operation automatically added by TensorFlow and another workaround with new versions of TensorFlow could be to disable its insertion into an operation graph, but I was not able to quickly find a way to do it.\r\n\r\nAlso, regardless of the error message printed an ONNX file is still generated (which obviously fails at inference time), so yet another workaround could be to remove `PartitionedCall`s from an ONNX file.\r\n",
"Thanks for investigating. With your workaround, does the model work during inference as expected?\r\n\r\nIf so, I guess we can safely close the issue here? ",
">Thanks for investigating. With your workaround, does the model work during inference as expected?\r\n\r\nYes, I rerun the cells that were comparing outputs of a TF model and an ONNX model and the outputs match.\r\n\r\n>If so, I guess we can safely close the issue here?\r\n\r\nWell, from my perspective ideally one of workarounds would be applied in `transformers` and `TFSegformerForSemanticSegmentation` would work with the most recent releases of TF and other packages, but I also understand that eventually `tf2onnx` developers should do something with `PartitionedCall` export and this issue would be solved too.\r\n",
"In fact, `PartitionedCall` may not be the root cause of the problem.\r\n\r\nI looked at the ONNX file produced with TF 2.11.0 [in the notebook above](https://github.com/deep-diver/segformer-tf-transformers/blob/main/notebooks/TFSegFormer_ONNX.ipynb) by doing\r\n\r\n```\r\nonnx_model = onnx.load(onnx_model_path)\r\nwith open(\"model.txt\", \"w\") as f:\r\n f.write(str(onnx_model))\r\n```\r\n\r\nIt has the following node\r\n\r\n```\r\nnode {\r\n input: \"tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.0/mlp/dwconv/Reshape:0\"\r\n input: \"tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.0/mlp/dwconv/dwconv/ReadVariableOp:0\"\r\n output: \"tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.0/mlp/dwconv/dwconv/PartitionedCall:0\"\r\n name: \"tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.0/mlp/dwconv/dwconv/PartitionedCall\"\r\n op_type: \"PartitionedCall\"\r\n ...\r\n attribute {\r\n name: \"f\"\r\n s: \"__inference__jit_compiled_convolution_op_6171\"\r\n type: STRING\r\n }\r\n }\r\n ```\r\n \r\nThe issue is that node `__inference__jit_compiled_convolution_op_6171` is referenced, but its definition is nowhere to be found. So likely tf2onnx failed to convert that operation at the first place.\r\n\r\nThere was a similar issue, where one of tf2onnx contributors [said](https://github.com/onnx/tensorflow-onnx/issues/1093#issuecomment-707239545):\r\n\r\n>StatefulPartitionedCall is an op that does a simple function call in TF. Our converter doesn't normally have to deal with it since the optimizer we run before conversion automatically inlines most function calls. If it shows up in the optimized graph there is usually some reason that will prevent conversion from working. \r\n\r\nI created an issue with the details above in tf2onnx GitHub: https://github.com/onnx/tensorflow-onnx/issues/2127\r\n ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,680
| 1,680
|
NONE
| null |
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
Additionally, the versions of some relevant packages are
```
transformers @ git+https://github.com/huggingface/transformers@762dda44deed29baab049aac5324b49f134e7536
onnx==1.13.0
onnxruntime==1.14.0
tf2onnx==1.13.0
```
### Who can help?
@gante, @Rocketknight1
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Run this notebook (https://github.com/deep-diver/segformer-tf-transformers/blob/main/notebooks/TFSegFormer_ONNX.ipynb) in Colab. ONNX export apparently worked there as of July 25, 2022, but it fails now (a rough sketch of the export step is shown below).
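For reference, the core of the export step the notebook performs looks roughly like this (a sketch; the checkpoint, input shape and opset are assumptions, not the notebook's exact code):
```py
import tensorflow as tf
import tf2onnx
from transformers import TFSegformerForSemanticSegmentation

model = TFSegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
input_signature = [tf.TensorSpec([1, 3, 512, 512], tf.float32, name="pixel_values")]
# The PartitionedCall errors below surface while converting the depthwise conv ops.
onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature, opset=13)
```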
The error message is
```
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.0.0/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.0.1/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.0.2/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.0/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.1/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.2/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.3/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.4/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.1.5/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.0/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.1/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.2/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.3/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.4/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.5/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.6/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.7/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.8/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.9/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.10/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.11/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.12/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.13/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.14/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.15/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.16/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.17/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.18/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.19/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.20/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.21/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.22/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.23/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.24/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.25/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.26/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.27/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.28/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.29/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.30/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.31/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.32/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.33/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.34/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.35/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.36/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.37/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.38/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.2.39/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.3.0/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.3.1/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [tf_segformer_for_semantic_segmentation/segformer/encoder/block.3.2/mlp/dwconv/dwconv/PartitionedCall: PartitionedCall] is not supported
ERROR:tf2onnx.tfonnx:Unsupported ops: Counter({'PartitionedCall': 52})
```
### Expected behavior
ONNX export of TFSegformerForSemanticSegmentation works.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21642/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21641
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21641/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21641/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21641/events
|
https://github.com/huggingface/transformers/issues/21641
| 1,585,794,983
|
I_kwDOCUB6oc5ehU-n
| 21,641
|
Confusing documentation in T5
|
{
"login": "seanmor5",
"id": 14100120,
"node_id": "MDQ6VXNlcjE0MTAwMTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/14100120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seanmor5",
"html_url": "https://github.com/seanmor5",
"followers_url": "https://api.github.com/users/seanmor5/followers",
"following_url": "https://api.github.com/users/seanmor5/following{/other_user}",
"gists_url": "https://api.github.com/users/seanmor5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seanmor5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seanmor5/subscriptions",
"organizations_url": "https://api.github.com/users/seanmor5/orgs",
"repos_url": "https://api.github.com/users/seanmor5/repos",
"events_url": "https://api.github.com/users/seanmor5/events{/privacy}",
"received_events_url": "https://api.github.com/users/seanmor5/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @ArthurZucker and @younesbelkada ",
"The T5 behaviour is correct, and as pointed out the doc is probably not! I'll open a PR to fix this ๐๐ป thanks "
] | 1,676
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### System Info
latest
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Not sure if this is a bug exactly, but the way the documentation reads for T5 doesn't seem correct in the context of Flan-T5. Specifically, for the configuration parameter `d_kv` it states:
```
d_kv (int, optional, defaults to 64) – Size of the key, query, value projections per attention head. d_kv has to be equal to d_model // num_heads.
```
However, if you look at `flan-t5-small`, `d_kv` is 64 while the hidden size is 512 and the number of heads is 6, which obviously doesn't hold up against the statement in the docs.
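A quick check (a sketch; it only needs Hub access to download the config) confirms that the released config contradicts the docstring's claim:

```python
from transformers import T5Config

config = T5Config.from_pretrained("google/flan-t5-small")
print(config.d_kv)                         # 64
print(config.d_model // config.num_heads)  # 512 // 6 == 85, so d_kv != d_model // num_heads
```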
Is the T5 behavior correct as is? Are the docs just wrong?
### Expected behavior
Looking at the implementation of T5, it seems generic w.r.t. `d_kv`, `num_heads`, and `d_model`, and `inner_dim` is actually the size of the k/v/q projection.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21641/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21640
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21640/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21640/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21640/events
|
https://github.com/huggingface/transformers/pull/21640
| 1,585,769,197
|
PR_kwDOCUB6oc5KB4mX
| 21,640
|
[WIP] Move X-MOD models to facebook organization
|
{
"login": "jvamvas",
"id": 5830820,
"node_id": "MDQ6VXNlcjU4MzA4MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5830820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jvamvas",
"html_url": "https://github.com/jvamvas",
"followers_url": "https://api.github.com/users/jvamvas/followers",
"following_url": "https://api.github.com/users/jvamvas/following{/other_user}",
"gists_url": "https://api.github.com/users/jvamvas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jvamvas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jvamvas/subscriptions",
"organizations_url": "https://api.github.com/users/jvamvas/orgs",
"repos_url": "https://api.github.com/users/jvamvas/repos",
"events_url": "https://api.github.com/users/jvamvas/events{/privacy}",
"received_events_url": "https://api.github.com/users/jvamvas/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
As discussed in https://github.com/huggingface/transformers/pull/20939, the new models https://huggingface.co/jvamvas/xmod-base etc. should be moved to the [facebook](https://huggingface.co/facebook) organization.
This PR changes the hardcoded model names in the code.
Next steps:
- [ ] Someone please add me to the facebook org
- [ ] I move the models
- [ ] PR can be merged
- [ ] I leave the facebook org
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/pull/20939
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21640/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21640",
"html_url": "https://github.com/huggingface/transformers/pull/21640",
"diff_url": "https://github.com/huggingface/transformers/pull/21640.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21640.patch",
"merged_at": 1676557106000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21639
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21639/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21639/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21639/events
|
https://github.com/huggingface/transformers/issues/21639
| 1,585,737,843
|
I_kwDOCUB6oc5ehHBz
| 21,639
|
DataCollatorForTokenClassification pads labels incorrectly for LukeModel
|
{
"login": "SuijkerbuijkP",
"id": 60673023,
"node_id": "MDQ6VXNlcjYwNjczMDIz",
"avatar_url": "https://avatars.githubusercontent.com/u/60673023?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuijkerbuijkP",
"html_url": "https://github.com/SuijkerbuijkP",
"followers_url": "https://api.github.com/users/SuijkerbuijkP/followers",
"following_url": "https://api.github.com/users/SuijkerbuijkP/following{/other_user}",
"gists_url": "https://api.github.com/users/SuijkerbuijkP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuijkerbuijkP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuijkerbuijkP/subscriptions",
"organizations_url": "https://api.github.com/users/SuijkerbuijkP/orgs",
"repos_url": "https://api.github.com/users/SuijkerbuijkP/repos",
"events_url": "https://api.github.com/users/SuijkerbuijkP/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuijkerbuijkP/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Did you tried the `DataCollatorForLukeTokenClassification` :thinking: \r\n\r\nFor LUKE and NER there's this demo project available:\r\n\r\nhttps://github.com/huggingface/transformers/tree/main/examples/research_projects/luke",
"Thanks, I did not know that exists, as it is not part of the normal datacollators. Good chance that that works.\r\n\r\nAny chance this will become part of the transformers library? Instead of just importing it we have to copy over the utils file. ",
"No this is too specific to be in the library proper.",
"The data collator in the script pointed out by @stefan-it works, when removing the original_entity_spans part of that collator that is not used or outputted by the LukeTokenizer. So thanks for that! \r\n\r\nIs it too specific? All the other luke parts are available as imports, having this one thing only available as a custom code import feels as a lost opportunity. \r\n\r\nAnyways, thanks for the quick help!",
"Keep in mind that the Transformers library is primarily a library of models, not data collators ;-)",
"Good point, you tend to forget that when the whole solution works so seamlessly :) I guess I was spoiled ;)"
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0
- Platform: Linux-4.18.0-348.12.2.el8_5.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am using a LUKE-based workflow with my own text dataset. This works perfectly fine in a standard setup (e.g. the pretrained luke-large-finetuned-conll-2003 LukeTokenizer with padding and truncation, combined with a standard Trainer instance), but strange behavior was observed when trying to implement dynamic padding. Still using the same tokenizer, now with truncation only, and using DataCollatorForTokenClassification to pad batches during Trainer.train(), the batch size was reported as wrong (ValueError: Expected input batch_size (30) to match target batch_size (46)).
In the output of the working workflow (padding and truncation in the tokenizer), the labels, entity_ids, entity_position_ids, entity_start_positions, entity_end_positions and entity_attention_mask all share one size, while the input_ids and attention_mask share another, possibly different, size. As mentioned, this works.
After the DataCollatorForTokenClassification, however, the labels are the same size as the input_ids and attention_mask. This is incorrect for the selected tokenizer and leads to the error above.
### Expected behavior
Labels should be padded according to the entity_ids, not according to the input_ids.
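A minimal sketch of a collator that pads labels to the entity length (the names here are illustrative; the `DataCollatorForLukeTokenClassification` from the research project linked in the comments above is the more complete version):

```python
import torch
from dataclasses import dataclass
from transformers import PreTrainedTokenizerBase

@dataclass
class LukeTokenClassificationCollator:
    tokenizer: PreTrainedTokenizerBase
    label_pad_token_id: int = -100

    def __call__(self, features):
        labels = [feature.pop("labels") for feature in features]
        # LukeTokenizer.pad handles input_ids/attention_mask and the entity_* fields.
        batch = self.tokenizer.pad(features, return_tensors="pt")
        # Pad labels to the padded *entity* sequence length, not the token length.
        entity_len = batch["entity_ids"].shape[1]
        batch["labels"] = torch.tensor(
            [seq + [self.label_pad_token_id] * (entity_len - len(seq)) for seq in labels],
            dtype=torch.long,
        )
        return batch
```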
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21639/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21638
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21638/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21638/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21638/events
|
https://github.com/huggingface/transformers/issues/21638
| 1,585,619,407
|
I_kwDOCUB6oc5egqHP
| 21,638
|
CLIP image processor fails when resizing a 1x1 image
|
{
"login": "justinpinkney",
"id": 605492,
"node_id": "MDQ6VXNlcjYwNTQ5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/605492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justinpinkney",
"html_url": "https://github.com/justinpinkney",
"followers_url": "https://api.github.com/users/justinpinkney/followers",
"following_url": "https://api.github.com/users/justinpinkney/following{/other_user}",
"gists_url": "https://api.github.com/users/justinpinkney/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justinpinkney/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justinpinkney/subscriptions",
"organizations_url": "https://api.github.com/users/justinpinkney/orgs",
"repos_url": "https://api.github.com/users/justinpinkney/repos",
"events_url": "https://api.github.com/users/justinpinkney/events{/privacy}",
"received_events_url": "https://api.github.com/users/justinpinkney/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for raising this issue @justinpinkney and for the detailed snippet and trackback! \r\n\r\nIndeed, you're right, the issue is arising from trying to infer the image channel dimension. As it's possible to have images with a single channel, and images with 3 channels, an image with shape `(1, 1, 3)` could be either a 1x3 single channel image or 1x1 3 channel image. This ambiguity in dimensions causes many issues and it's one that I'm currently trying to address.\r\n\r\nDepending on the input data format you're feeding to the image processor (torch/pil/tf/np/jax and batched/single image/list of images), the fastest way around this would be tiling the pixels to create a compatible shape e.g. 2x2x3 image, as this will result in the same image after resizing as the original 1x1. However, this is quite hacky and the bug will still persist in cases when the dimensions cannot be confidently inferred e.g. a 3x3x3 image. \r\n\r\nI'll make sure to keep this issue updated with changes to the code to address this. ",
"I was hoping I could specify the data format using the `data_format` argument, but that turned out to be just for the output images, not specifying the inputs. In my case these 1xn and 1x1 images were just bad samples, so I could filter them out in the data loading pipeline.\r\n\r\nThanks for the quick response though!",
"Yes, at the moment it just controls the output format. I think being able to specify the input data format is a good solution however! I'll draft something up :) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,693
| 1,693
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following:
```python
from PIL import Image
import requests
from transformers import AutoProcessor, CLIPModel
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image = image.resize((1,1))
print(image.mode, image.size)
inputs = processor(
text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
The issue appears to be caused by `infer_channel_dimension_format(image)`: for a numpy array of shape 1x1x3 (a 1x1 RGB image) it returns `<ChannelDimension.FIRST: 'channels_first'>`, which is incorrect in this case.
Gives the error:
```
Traceback (most recent call last):
File "hf_bug.py", line 15, in <module>
text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
File "/some/path/.venv/lib/python3.7/site-packages/transformers/models/clip/processing_clip.py", line 102, in __call__
image_features = self.image_processor(images, return_tensors=return_tensors, **kwargs)
File "/some/path/.venv/lib/python3.7/site-packages/transformers/image_processing_utils.py", line 446, in __call__
return self.preprocess(images, **kwargs)
File "/some/path/.venv/lib/python3.7/site-packages/transformers/models/clip/image_processing_clip.py", line 327, in preprocess
images = [self.normalize(image=image, mean=image_mean, std=image_std) for image in images]
File "/some/path/.venv/lib/python3.7/site-packages/transformers/models/clip/image_processing_clip.py", line 327, in <listcomp>
images = [self.normalize(image=image, mean=image_mean, std=image_std) for image in images]
File "/some/path/.venv/lib/python3.7/site-packages/transformers/models/clip/image_processing_clip.py", line 211, in normalize
return normalize(image, mean=mean, std=std, data_format=data_format, **kwargs)
File "/some/path/.venv/lib/python3.7/site-packages/transformers/image_transforms.py", line 334, in normalize
raise ValueError(f"mean must have {num_channels} elements if it is an iterable, got {len(mean)}")
ValueError: mean must have 1 elements if it is an iterable, got 3
```
### Expected behavior
the 1x1 input image should be resized to 224x224
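In the meantime, a hedged sketch of the tiling workaround mentioned in the comments above (a stopgap only; it does not help shapes such as 3x3x3 where channel inference stays ambiguous):

```python
import numpy as np
from PIL import Image

def tile_tiny_image(image: Image.Image) -> Image.Image:
    """Tile 1-pixel-wide/tall RGB images to at least 2x2 so the channel
    dimension can no longer be mistaken for a spatial dimension."""
    arr = np.asarray(image.convert("RGB"))  # shape (H, W, 3)
    h, w = arr.shape[:2]
    reps = (2 if h == 1 else 1, 2 if w == 1 else 1, 1)
    return Image.fromarray(np.tile(arr, reps))

image = tile_tiny_image(image)  # no-op for images already >= 2px in both dims
# Resizing the tiled image to 224x224 yields the same result as resizing the 1x1 original.
```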
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21638/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21638/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21637
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21637/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21637/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21637/events
|
https://github.com/huggingface/transformers/pull/21637
| 1,585,505,957
|
PR_kwDOCUB6oc5KA_qQ
| 21,637
|
Fix Blip-2 CI again
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@gante Don't worry. You added 2 new tests, and we just need to use FP16 (for those 2 new tests) to avoid GPU OOM. The original 2 tests are not undo by your PR.",
"Oh, I see! haha that makes more sense now :)"
] | 1,676
| 1,676
| 1,676
|
COLLABORATOR
| null |
# What does this PR do?
The fix added in #21566 has to be applied to the later-merged commit #21624.
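The fix boils down to running the two affected slow tests in half precision, roughly like this (a sketch, not the exact test diff):

```python
import torch
from transformers import Blip2ForConditionalGeneration

# Loading the checkpoint in fp16 halves the GPU memory needed by the integration tests.
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")
```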
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21637/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21637",
"html_url": "https://github.com/huggingface/transformers/pull/21637",
"diff_url": "https://github.com/huggingface/transformers/pull/21637.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21637.patch",
"merged_at": 1676455182000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21636
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21636/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21636/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21636/events
|
https://github.com/huggingface/transformers/pull/21636
| 1,585,417,212
|
PR_kwDOCUB6oc5KAsl2
| 21,636
|
Pass parent exception as context exception to provide clearer stack trace
|
{
"login": "balvisio",
"id": 1909351,
"node_id": "MDQ6VXNlcjE5MDkzNTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1909351?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/balvisio",
"html_url": "https://github.com/balvisio",
"followers_url": "https://api.github.com/users/balvisio/followers",
"following_url": "https://api.github.com/users/balvisio/following{/other_user}",
"gists_url": "https://api.github.com/users/balvisio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/balvisio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/balvisio/subscriptions",
"organizations_url": "https://api.github.com/users/balvisio/orgs",
"repos_url": "https://api.github.com/users/balvisio/repos",
"events_url": "https://api.github.com/users/balvisio/events{/privacy}",
"received_events_url": "https://api.github.com/users/balvisio/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Passes the parent exception as the context exception so that the original cause of the exception is clearer. Currently the traceback contains the confusing message: "During handling of the above exception, another exception occurred:".
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
@ArthurZucker small improvement
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21636/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21636",
"html_url": "https://github.com/huggingface/transformers/pull/21636",
"diff_url": "https://github.com/huggingface/transformers/pull/21636.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21636.patch",
"merged_at": 1676478842000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21635
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21635/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21635/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21635/events
|
https://github.com/huggingface/transformers/issues/21635
| 1,585,133,061
|
I_kwDOCUB6oc5eezYF
| 21,635
|
How to finetune mt0-xxl-mt(13B parameters) seq2seq_qa with deepspeed
|
{
"login": "NiushanDong",
"id": 34929731,
"node_id": "MDQ6VXNlcjM0OTI5NzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/34929731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NiushanDong",
"html_url": "https://github.com/NiushanDong",
"followers_url": "https://api.github.com/users/NiushanDong/followers",
"following_url": "https://api.github.com/users/NiushanDong/following{/other_user}",
"gists_url": "https://api.github.com/users/NiushanDong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NiushanDong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NiushanDong/subscriptions",
"organizations_url": "https://api.github.com/users/NiushanDong/orgs",
"repos_url": "https://api.github.com/users/NiushanDong/repos",
"events_url": "https://api.github.com/users/NiushanDong/events{/privacy}",
"received_events_url": "https://api.github.com/users/NiushanDong/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to help debug your code as we keep issues for bugs and feature requests only.",
"Oh sorry, I will close this issue and move to forums.",
"I closed this issue because this is not about bugs and feature requests"
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
### System Info
I tried to fine-tune mt0-xxl-mt with the script `examples/pytorch/question-answering/run_seq2seq_qa.py`. The machine has 8 x V100 (32GB) GPUs and 250GB of CPU memory, but training failed with OOM. Can anyone help me?
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This is the command:
`deepspeed --num_gpus 8 run_seq2seq_qa.py --model_name_or_path bigscience/mt0-xxl-mt --output_dir ./output --dataset_name squad_v2 --context_column context --question_column question --answer_column answers --do_train --auto_find_batch_size --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 512 --deepspeed ./ds_config.json`
This is the `ds_config.json`:
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"sub_group_size": 1e9,
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
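Given the model size, a quick sanity check is DeepSpeed's memory estimator (a sketch; the import path follows the DeepSpeed docs of the time and may differ between versions):

```python
from transformers import AutoModelForSeq2SeqLM
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live

# Prints per-GPU and CPU memory estimates for ZeRO-3 with and without offload,
# showing whether 8 x 32GB GPUs plus 250GB of CPU RAM can hold a ~13B model.
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-xxl-mt")
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=8, num_nodes=1)
```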
### Expected behavior
```shell
Time to load utils op: 0.0003857612609863281 seconds
[2023-02-15 11:00:40,462] [INFO] [utils.py:831:see_memory_usage] DeepSpeedZeRoOffload initialize [begin]
[2023-02-15 11:00:40,463] [INFO] [utils.py:832:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2023-02-15 11:00:40,463] [INFO] [utils.py:840:see_memory_usage] CPU Virtual Memory: used = 226.3 GB, percent = 90.0%
Parameter Offload: Total persistent parameters: 503808 in 124 params
[2023-02-15 11:00:40,595] [INFO] [utils.py:831:see_memory_usage] DeepSpeedZeRoOffload initialize [end]
[2023-02-15 11:00:40,595] [INFO] [utils.py:832:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2023-02-15 11:00:40,596] [INFO] [utils.py:840:see_memory_usage] CPU Virtual Memory: used = 226.32 GB, percent = 90.0%
[2023-02-15 11:00:40,700] [INFO] [utils.py:831:see_memory_usage] Before creating fp16 partitions
[2023-02-15 11:00:40,701] [INFO] [utils.py:832:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2023-02-15 11:00:40,702] [INFO] [utils.py:840:see_memory_usage] CPU Virtual Memory: used = 226.33 GB, percent = 90.0%
[2023-02-15 11:00:43,709] [INFO] [utils.py:831:see_memory_usage] After creating fp16 partitions: 12
[2023-02-15 11:00:43,710] [INFO] [utils.py:832:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2023-02-15 11:00:43,711] [INFO] [utils.py:840:see_memory_usage] CPU Virtual Memory: used = 226.3 GB, percent = 90.0%
[2023-02-15 11:00:43,807] [INFO] [utils.py:831:see_memory_usage] Before creating fp32 partitions
[2023-02-15 11:00:43,807] [INFO] [utils.py:832:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2023-02-15 11:00:43,808] [INFO] [utils.py:840:see_memory_usage] CPU Virtual Memory: used = 226.32 GB, percent = 90.0%
Traceback (most recent call last):
File "/data/yckj1358/projects/nlg/jobs/mt0-xxl-mt/../..//tools/hf_run_seq2seq_qa.py", line 767, in <module>
main()
File "/data/yckj1358/projects/nlg/jobs/mt0-xxl-mt/../..//tools/hf_run_seq2seq_qa.py", line 703, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/data/yckj1358/.virtualenvs/transformers-pytorch-gpu/lib/python3.9/site-packages/transformers-4.27.0.dev0-py3.9.egg/transformers/trainer.py", line 1571, in train
return inner_training_loop(
File "/data/yckj1358/.virtualenvs/transformers-pytorch-gpu/lib/python3.9/site-packages/accelerate/utils/memory.py", line 122, in decorator
raise RuntimeError("No executable batch size found, reached zero.")
RuntimeError: No executable batch size found, reached zero.
[2023-02-15 11:01:24,284] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 115133
```
### Checklist
- [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [X] I checked if a related official extension example runs on my machine.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21635/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21634
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21634/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21634/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21634/events
|
https://github.com/huggingface/transformers/pull/21634
| 1,584,825,644
|
PR_kwDOCUB6oc5J-tbf
| 21,634
|
Remove extra "`max_length` is reached." from InfNaNLogitsProcessor documentation
|
{
"login": "mmcdermott",
"id": 470751,
"node_id": "MDQ6VXNlcjQ3MDc1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/470751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmcdermott",
"html_url": "https://github.com/mmcdermott",
"followers_url": "https://api.github.com/users/mmcdermott/followers",
"following_url": "https://api.github.com/users/mmcdermott/following{/other_user}",
"gists_url": "https://api.github.com/users/mmcdermott/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmcdermott/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmcdermott/subscriptions",
"organizations_url": "https://api.github.com/users/mmcdermott/orgs",
"repos_url": "https://api.github.com/users/mmcdermott/repos",
"events_url": "https://api.github.com/users/mmcdermott/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmcdermott/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Remove extra "`max_length` is reached." from InfNaNLogitsProcessor documentation
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21634/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21634",
"html_url": "https://github.com/huggingface/transformers/pull/21634",
"diff_url": "https://github.com/huggingface/transformers/pull/21634.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21634.patch",
"merged_at": 1676409143000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21633
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21633/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21633/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21633/events
|
https://github.com/huggingface/transformers/issues/21633
| 1,584,705,736
|
I_kwDOCUB6oc5edLDI
| 21,633
|
Add SwinIR for Image Super Resolution
|
{
"login": "asrimanth",
"id": 30816357,
"node_id": "MDQ6VXNlcjMwODE2MzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/30816357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asrimanth",
"html_url": "https://github.com/asrimanth",
"followers_url": "https://api.github.com/users/asrimanth/followers",
"following_url": "https://api.github.com/users/asrimanth/following{/other_user}",
"gists_url": "https://api.github.com/users/asrimanth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asrimanth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asrimanth/subscriptions",
"organizations_url": "https://api.github.com/users/asrimanth/orgs",
"repos_url": "https://api.github.com/users/asrimanth/repos",
"events_url": "https://api.github.com/users/asrimanth/events{/privacy}",
"received_events_url": "https://api.github.com/users/asrimanth/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Just wanna note that Swin2SR is already integrated, which improves upon SwinIR: https://huggingface.co/docs/transformers/main/model_doc/swin2sr",
"Ohh okay, thank you, I'll close this issue then.",
"Hi @NielsRogge thanks for adding this model to HF.\r\nDo you have in plans to add training process too?"
] | 1,676
| 1,687
| 1,676
|
CONTRIBUTOR
| null |
### Model description
SwinIR: Image Restoration Using Swin Transformer
- This paper presents an Image Super Resolution / Image Restoration model inspired by the SwinTransformer architecture.
- It demonstrates superior performance on various vision tasks including classical/lightweight Image Super Resolution, Image Denoising, and JPEG compression artifact reduction.
This issue focuses on the Image Super Resolution model only. I would love to see this model on HuggingFace.
## Contribution: I would love to work on this!
I am new to HuggingFace and open-source in general. I am open to comments/suggestions/feedback.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Paper Link: https://arxiv.org/pdf/2108.10257v1.pdf
Implementation Link: https://github.com/JingyunLiang/SwinIR
Weights Link: https://github.com/JingyunLiang/SwinIR/releases/tag/v0.0
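Since Swin2SR (the successor pointed out in the comments above) is already integrated, super-resolution works out of the box; a sketch following the Swin2SR documentation:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Swin2SRForImageSuperResolution

processor = AutoImageProcessor.from_pretrained("caidas/swin2SR-classical-sr-x2-64")
model = Swin2SRForImageSuperResolution.from_pretrained("caidas/swin2SR-classical-sr-x2-64")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

with torch.no_grad():
    outputs = model(**processor(image, return_tensors="pt"))
# outputs.reconstruction holds the 2x-upscaled image tensor of shape (1, 3, 2H, 2W).
```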
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21633/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21632
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21632/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21632/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21632/events
|
https://github.com/huggingface/transformers/pull/21632
| 1,584,654,640
|
PR_kwDOCUB6oc5J-IXb
| 21,632
|
Fix typo in documentation.
|
{
"login": "mmcdermott",
"id": 470751,
"node_id": "MDQ6VXNlcjQ3MDc1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/470751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmcdermott",
"html_url": "https://github.com/mmcdermott",
"followers_url": "https://api.github.com/users/mmcdermott/followers",
"following_url": "https://api.github.com/users/mmcdermott/following{/other_user}",
"gists_url": "https://api.github.com/users/mmcdermott/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmcdermott/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmcdermott/subscriptions",
"organizations_url": "https://api.github.com/users/mmcdermott/orgs",
"repos_url": "https://api.github.com/users/mmcdermott/repos",
"events_url": "https://api.github.com/users/mmcdermott/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmcdermott/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Replaces "the value used to module the next token probabilities" with "the value used to modulate the next token probabilities", which I think is what was originally meant.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21632/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21632",
"html_url": "https://github.com/huggingface/transformers/pull/21632",
"diff_url": "https://github.com/huggingface/transformers/pull/21632.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21632.patch",
"merged_at": 1676401230000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21631
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21631/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21631/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21631/events
|
https://github.com/huggingface/transformers/pull/21631
| 1,584,507,048
|
PR_kwDOCUB6oc5J9oYw
| 21,631
|
[XLN] Fix XLN
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Still a draft, will be fixing #21626 via a deprecation cycle",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21631). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,682
| 1,682
|
COLLABORATOR
| null |
# What does this PR do?
[This commit](https://github.com/huggingface/transformers/commit/87e6e4fe5c7e65cb69e70306f22de6daf16b6e14) broke the XLNet docstring explaining what the output of the `create_mask` function should be.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21631/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21631",
"html_url": "https://github.com/huggingface/transformers/pull/21631",
"diff_url": "https://github.com/huggingface/transformers/pull/21631.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21631.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21630
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21630/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21630/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21630/events
|
https://github.com/huggingface/transformers/pull/21630
| 1,584,370,667
|
PR_kwDOCUB6oc5J9K2q
| 21,630
|
Fix generation config for empty state dict
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
COLLABORATOR
| null |
# What does this PR do?
This PR follows up on #21542, which didn't fix the problem for generative models. For those, the generation config can't be generated properly and raises a different type of error from the one intercepted in `from_pretrained`. The fix is thus easy.
To make sure to catch the problem on all models, the test added in #21542 is graduated as a common test.
Fixes #21610
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21630/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21630",
"html_url": "https://github.com/huggingface/transformers/pull/21630",
"diff_url": "https://github.com/huggingface/transformers/pull/21630.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21630.patch",
"merged_at": 1676390248000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21629
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21629/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21629/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21629/events
|
https://github.com/huggingface/transformers/pull/21629
| 1,584,352,570
|
PR_kwDOCUB6oc5J9G52
| 21,629
|
Update data_collator.py
|
{
"login": "neeravkaushal",
"id": 48004241,
"node_id": "MDQ6VXNlcjQ4MDA0MjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/48004241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neeravkaushal",
"html_url": "https://github.com/neeravkaushal",
"followers_url": "https://api.github.com/users/neeravkaushal/followers",
"following_url": "https://api.github.com/users/neeravkaushal/following{/other_user}",
"gists_url": "https://api.github.com/users/neeravkaushal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neeravkaushal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neeravkaushal/subscriptions",
"organizations_url": "https://api.github.com/users/neeravkaushal/orgs",
"repos_url": "https://api.github.com/users/neeravkaushal/repos",
"events_url": "https://api.github.com/users/neeravkaushal/events{/privacy}",
"received_events_url": "https://api.github.com/users/neeravkaushal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Ah, got it! Thank you!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21629). All of your documentation changes will be reflected on that endpoint."
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
Fix 10% random token replacement.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21629/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21629",
"html_url": "https://github.com/huggingface/transformers/pull/21629",
"diff_url": "https://github.com/huggingface/transformers/pull/21629.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21629.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21628
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21628/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21628/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21628/events
|
https://github.com/huggingface/transformers/issues/21628
| 1,584,316,203
|
I_kwDOCUB6oc5ebr8r
| 21,628
|
Donut base-sized model, pre-trained only for a new language tutorial
|
{
"login": "Wyzix33",
"id": 13553412,
"node_id": "MDQ6VXNlcjEzNTUzNDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/13553412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wyzix33",
"html_url": "https://github.com/Wyzix33",
"followers_url": "https://api.github.com/users/Wyzix33/followers",
"following_url": "https://api.github.com/users/Wyzix33/following{/other_user}",
"gists_url": "https://api.github.com/users/Wyzix33/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wyzix33/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wyzix33/subscriptions",
"organizations_url": "https://api.github.com/users/Wyzix33/orgs",
"repos_url": "https://api.github.com/users/Wyzix33/repos",
"events_url": "https://api.github.com/users/Wyzix33/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wyzix33/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Those questions are most suited to the [forums](https://discuss.huggingface.co/) where the whole community will be able to help. We keep issues for bugs and feature requests only.",
"OK, will do that! \r\nThanks"
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
### Feature request
I'm trying to pre-train the Donut model from scratch using Romanian-language documents.
I have about 100k scanned documents and want to create a Romanian pre-trained checkpoint so I can fine-tune it on different types of documents for parsing and classification.
Can anyone share or create a tutorial on how to pre-train a new Donut model from scratch?
Thanks
### Motivation
Donut only has a few language models available.
### Your contribution
I can share the pretrained model I create for Romanian.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21628/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21627
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21627/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21627/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21627/events
|
https://github.com/huggingface/transformers/pull/21627
| 1,584,270,784
|
PR_kwDOCUB6oc5J81HJ
| 21,627
|
Error (also in original) model, scaling only q matrix not qk.T dot product (qk.T/sqrt(dim_per_head))
|
{
"login": "BenoitDalFerro",
"id": 69694610,
"node_id": "MDQ6VXNlcjY5Njk0NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenoitDalFerro",
"html_url": "https://github.com/BenoitDalFerro",
"followers_url": "https://api.github.com/users/BenoitDalFerro/followers",
"following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}",
"gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions",
"organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs",
"repos_url": "https://api.github.com/users/BenoitDalFerro/repos",
"events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker and @younesbelkada \r\n\r\nAlso note that the same change would need to be applied to XLM.",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @younesbelkada ok I'm on it thanks",
"@younesbelkada curious `make fixup` executed fine and passing `make repo_consistency` locally but not remotely any clue ?",
"This comment might help you: https://github.com/huggingface/transformers/pull/20939#issuecomment-1423974311\r\n\r\nLooking closer at your code, maybe a bad rebase happened on the `xlm` file, can you revert the changes there, and just modify the line as you did for flaubert, then run `make fix-copies` ?",
"@younesbelkada on a side note, the sqrt could be computed only once at init as self.sqrt_d and also with torch.sqrt() which is about 29x faster than math.sqrt() cf. benchmark https://twitter.com/k_saifullaah/status/1430510295257030658/photo/1",
"This is interesting, thanks for sharing! \r\nFeel free to add this change inside [`MultiHeadAttention`](https://github.com/huggingface/transformers/blob/d3b1adf59fff726f1c8f324728e562237f080ce6/src/transformers/models/xlm/modeling_xlm.py#L104) and then apply `make fix-copies` so that users can benefit from interesting speedups as you are suggesting\r\nOtherwise, this can be addressed in a follow up PR too",
"Will do next in new PR to ensure consistency with above title"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
As per Vaswani et al., 2017, p. 4, the attention scores should be torch.matmul(q, k.transpose(2, 3)) / math.sqrt(dim_per_head), not q / math.sqrt(dim_per_head) alone (https://arxiv.org/pdf/1912.05372.pdf).
The error was also in the original FlauBERT repo and effectively scales the queries but not the keys; cf. https://github.com/getalp/Flaubert/pull/45/commits/6d176880ca3a1a8dfa2b76c97030bb51c5e917b8
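For illustration, a minimal sketch (shapes are assumed values, not the actual FlauBERT code) of the corrected scaling:
```python
import math
import torch

batch, n_heads, seq_len, dim_per_head = 2, 4, 8, 16
q = torch.rand(batch, n_heads, seq_len, dim_per_head)
k = torch.rand(batch, n_heads, seq_len, dim_per_head)

# Scale the full q.k^T product by sqrt(dim_per_head), as in Vaswani et al. (2017).
scores = torch.matmul(q, k.transpose(2, 3)) / math.sqrt(dim_per_head)
weights = torch.softmax(scores, dim=-1)
print(weights.shape)  # torch.Size([2, 4, 8, 8])
```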
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21627/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21627",
"html_url": "https://github.com/huggingface/transformers/pull/21627",
"diff_url": "https://github.com/huggingface/transformers/pull/21627.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21627.patch",
"merged_at": 1676403573000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21626
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21626/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21626/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21626/events
|
https://github.com/huggingface/transformers/issues/21626
| 1,584,261,695
|
I_kwDOCUB6oc5ebeo_
| 21,626
|
XLNet fails with attn_type "uni"
|
{
"login": "jppgks",
"id": 11156808,
"node_id": "MDQ6VXNlcjExMTU2ODA4",
"avatar_url": "https://avatars.githubusercontent.com/u/11156808?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jppgks",
"html_url": "https://github.com/jppgks",
"followers_url": "https://api.github.com/users/jppgks/followers",
"following_url": "https://api.github.com/users/jppgks/following{/other_user}",
"gists_url": "https://api.github.com/users/jppgks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jppgks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jppgks/subscriptions",
"organizations_url": "https://api.github.com/users/jppgks/orgs",
"repos_url": "https://api.github.com/users/jppgks/repos",
"events_url": "https://api.github.com/users/jppgks/events{/privacy}",
"received_events_url": "https://api.github.com/users/jppgks/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker and @younesbelkada ",
"This is a fairly old model ๐
It does make sense to drop `uni` (first because it is not working and did not bother anyone) but also let's just redirect to the new [TransformerXL](https://huggingface.co/docs/transformers/model_doc/transfo-xl). Thanks for reporting",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,684
| 1,684
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.10.104-linuxkit-aarch64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@thomwolf
### Information
- My own modified scripts
### Tasks
- An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
### Reproduction
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("xlnet-base-cased")
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
# Set attention type
model.transformer.attn_type = "uni"
inputs = tokenizer(["Hello, my dog is cute", "Hello, my dog is cute too"], return_tensors="pt", padding=True)
print(inputs)
outputs = model(**inputs)
```
Error:
```python-traceback
{'input_ids': tensor([[ 5, 17, 11368, 19, 94, 2288, 27, 10920, 4, 3],
[ 17, 11368, 19, 94, 2288, 27, 10920, 269, 4, 3]]), 'token_type_ids': tensor([[3, 0, 0, 0, 0, 0, 0, 0, 0, 2],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 2]]), 'attention_mask': tensor([[0, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
Traceback (most recent call last):
File "xlnet.py", line 70, in <module>
outputs = model(**inputs)
File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/vscode/.local/lib/python3.8/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1547, in forward
transformer_outputs = self.transformer(
File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/vscode/.local/lib/python3.8/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1161, in forward
attn_mask += data_mask[:, :, :, None]
RuntimeError: output with shape [10, 10, 1, 1] doesn't match the broadcast shape [10, 10, 2, 1]
```
### Expected behavior
Successful forward pass with the appropriate attention masks applied.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21626/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21625
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21625/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21625/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21625/events
|
https://github.com/huggingface/transformers/pull/21625
| 1,584,199,935
|
PR_kwDOCUB6oc5J8lpy
| 21,625
|
Add OPT resources to the transformers documentation
|
{
"login": "alissadb",
"id": 96190409,
"node_id": "U_kgDOBbu_yQ",
"avatar_url": "https://avatars.githubusercontent.com/u/96190409?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alissadb",
"html_url": "https://github.com/alissadb",
"followers_url": "https://api.github.com/users/alissadb/followers",
"following_url": "https://api.github.com/users/alissadb/following{/other_user}",
"gists_url": "https://api.github.com/users/alissadb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alissadb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alissadb/subscriptions",
"organizations_url": "https://api.github.com/users/alissadb/orgs",
"repos_url": "https://api.github.com/users/alissadb/repos",
"events_url": "https://api.github.com/users/alissadb/events{/privacy}",
"received_events_url": "https://api.github.com/users/alissadb/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot! I think when I saved the file in my editor it automatically changed the formatting. But anyway, I've reverted the formatting changes ๐ ",
"Thanks again for the changes, everything looks great! Pinging @sgugger for a final review, and then we can merge ๐ "
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #20055 (partially)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
@stevhliu
Thanks in advance, if I miss anything please let me know :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21625/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21625/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21625",
"html_url": "https://github.com/huggingface/transformers/pull/21625",
"diff_url": "https://github.com/huggingface/transformers/pull/21625.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21625.patch",
"merged_at": 1676569468000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21624
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21624/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21624/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21624/events
|
https://github.com/huggingface/transformers/pull/21624
| 1,583,978,744
|
PR_kwDOCUB6oc5J71Tq
| 21,624
|
Generate: input expansion for any model input
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,684
| 1,676
|
MEMBER
| null |
# What does this PR do?
Fixes #21599
In line with #21603, this PR aims at generalizing `.generate()` for any-to-text models. In particular, it rewrites the function that expands the inputs when `num_beams>1` or `num_return_sequences>1` -- instead of expanding certain keywords within `model_kwargs`, expands any tensor therein. This assumes that all tensors in `model_kwargs` are per-row inputs, but that seems to be the case so far.
The TF case had a more complex change, as we had two functions to expand the inputs (depending on whether we wanted a new dimension or not). This PR also standardizes that distinction.
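For illustration, a minimal sketch (not the actual transformers implementation) of what per-row expansion of the tensors in `model_kwargs` can look like in PyTorch:
```python
import torch

def expand_model_kwargs(expand_size, **model_kwargs):
    # Expand every per-row tensor along the batch dimension; leave other values untouched.
    if expand_size == 1:
        return model_kwargs
    return {
        key: value.repeat_interleave(expand_size, dim=0) if isinstance(value, torch.Tensor) else value
        for key, value in model_kwargs.items()
    }

expanded = expand_model_kwargs(3, attention_mask=torch.ones(2, 5))
print(expanded["attention_mask"].shape)  # torch.Size([6, 5])
```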
Slow tests were run for:
- [x] GPT2 (both frameworks)
- [x] T5 (both frameworks)
- [x] BLIP2 (PT)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21624/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21624/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21624",
"html_url": "https://github.com/huggingface/transformers/pull/21624",
"diff_url": "https://github.com/huggingface/transformers/pull/21624.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21624.patch",
"merged_at": 1676384182000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21622
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21622/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21622/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21622/events
|
https://github.com/huggingface/transformers/pull/21622
| 1,583,820,478
|
PR_kwDOCUB6oc5J7Tk-
| 21,622
|
Don't run tests that hit CTC loss calculation
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21622). All of your documentation changes will be reflected on that endpoint."
] | 1,676
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Any test which hits the CTC loss calculation - i.e. passes labels into the model's `call` method - results in OOM errors. For these tests, the CTC models are removed to prevent CI from failing.
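For illustration, a hypothetical sketch (class names assumed, not the actual transformers test suite) of filtering such models out of a shared test:
```python
import unittest

class DummyCTCModel: ...
class DummyOtherModel: ...

class ExampleModelTest(unittest.TestCase):
    all_model_classes = (DummyCTCModel, DummyOtherModel)
    ctc_model_classes = (DummyCTCModel,)  # these OOM when labels are passed

    def test_forward_with_labels(self):
        for model_class in self.all_model_classes:
            if model_class in self.ctc_model_classes:
                continue  # skip CTC models to avoid OOM on CI
            self.assertIsNotNone(model_class())
```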
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21622/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21622",
"html_url": "https://github.com/huggingface/transformers/pull/21622",
"diff_url": "https://github.com/huggingface/transformers/pull/21622.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21622.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21621
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21621/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21621/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21621/events
|
https://github.com/huggingface/transformers/pull/21621
| 1,583,750,522
|
PR_kwDOCUB6oc5J7EsW
| 21,621
|
Update document of WhisperDecoderLayer
|
{
"login": "ling0322",
"id": 865872,
"node_id": "MDQ6VXNlcjg2NTg3Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/865872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ling0322",
"html_url": "https://github.com/ling0322",
"followers_url": "https://api.github.com/users/ling0322/followers",
"following_url": "https://api.github.com/users/ling0322/following{/other_user}",
"gists_url": "https://api.github.com/users/ling0322/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ling0322/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ling0322/subscriptions",
"organizations_url": "https://api.github.com/users/ling0322/orgs",
"repos_url": "https://api.github.com/users/ling0322/repos",
"events_url": "https://api.github.com/users/ling0322/events{/privacy}",
"received_events_url": "https://api.github.com/users/ling0322/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Perfect, thanks again!"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Fix the documentation of the inputs `hidden_states` and `encoder_hidden_states` in `WhisperDecoderLayer.forward`.
According to the documentation of `WhisperDecoder`, the shape should be `(batch, seq_len, embed_dim)`, not `(seq_len, batch, embed_dim)`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21621/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21621",
"html_url": "https://github.com/huggingface/transformers/pull/21621",
"diff_url": "https://github.com/huggingface/transformers/pull/21621.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21621.patch",
"merged_at": 1676557200000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21620
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21620/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21620/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21620/events
|
https://github.com/huggingface/transformers/issues/21620
| 1,583,697,486
|
I_kwDOCUB6oc5eZU5O
| 21,620
|
Unable to disable the `do_resize` option in the CLIPImageProcessor
|
{
"login": "adhakal224",
"id": 55991758,
"node_id": "MDQ6VXNlcjU1OTkxNzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/55991758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adhakal224",
"html_url": "https://github.com/adhakal224",
"followers_url": "https://api.github.com/users/adhakal224/followers",
"following_url": "https://api.github.com/users/adhakal224/following{/other_user}",
"gists_url": "https://api.github.com/users/adhakal224/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adhakal224/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adhakal224/subscriptions",
"organizations_url": "https://api.github.com/users/adhakal224/orgs",
"repos_url": "https://api.github.com/users/adhakal224/repos",
"events_url": "https://api.github.com/users/adhakal224/events{/privacy}",
"received_events_url": "https://api.github.com/users/adhakal224/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @adhakal224, thanks for raising! The reason the output images are still being returned with 224x224 shape is that there's two operations which modify the images' size: resizing and center cropping. In order for the image to be returned in the original dimension, cropping will also have to be disabled.\r\n\r\n```\r\nfrom transformers import CLIPImageProcessor\r\nimport torch\r\n\r\nbatched_tensor = torch.rand((2,3,512,512))\r\n\r\nprocessor = CLIPImageProcessor.from_pretrained(\"openai/clip-vit-base-patch32\")\r\nprocessed_image = processor(\r\n list(batched_tensor), \r\n return_tensors='pt', \r\n padding=True, \r\n do_resize=False, \r\n do_center_crop=False\r\n)\r\n\r\nprint(processed_image.pixel_values.shape)\r\n\r\nOutput:\r\ntorch.Size([2, 3, 512, 512])\r\n```\r\n\r\nA few things to note:\r\n* If working from the dev branch, you don't need to convert the input batch of images into a list (this is handled in the image processor)\r\n* You will only be able to return a batch of torch arrays (`return_tensors=\"pt\"`) if all of the input images are of the same dimension - in this case 512x512 - and `do_resize=False` and `do_center_crop=False`. \r\n* You mentioned `\"I need to resize them before sending it to the CLIPImageProcessor.\"`. If you're resizing just before passing into CLIPImageProcessor, and it's a standard resizing operation e.g. [torch's resize ](https://pytorch.org/vision/main/generated/torchvision.transforms.Resize.html) you could keep `do_resize=True` instead. Resizing is the first transformation the image processor so would be equivalent. However, it might be slower going through the image processor as it converts to PIL.Image.Image for resizing. ",
"Thanks for the reply @amyeroberts. Currently I have a large dataset that is saved in `WebDataset` format. It has a very large volume of images and many of them are of different sizes. When iterating through the webdataset I read the images as numpy arrays. I am using `pytorch_lightning` and so I need to wrap the `WebDataset` with a `DataLoader` (for which I need to resize them all to a fixed size) before sending them to the `trainer`. In my current pipeline, I am using `torch.transform.ToTensor()` and `torch.transform.Resize()` to convert the images to tensor and resize them and then sending them to CLIPImageProcessor. Is there a more efficient way I can be doing this?\r\n\r\nBelow is the class that creates and returns me the `webdataset`\r\n```\r\nclass MultiData:\r\n\r\n def __init__(self, wds_path):\r\n self.img_size = 224\r\n self.wds_path = wds_path\r\n self.dataset = wds.WebDataset(self.wds_path)\r\n print('Initializing dataset')\r\n\r\n def get_ds(self):\r\n self.dataset = self.dataset.shuffle(1000).decode('rgb').to_tuple(\"groundlevel.jpg\", \"overhead.jpg\", \"metadata.json\")\r\n self.dataset = self.dataset.map(self.do_transforms)\r\n return self.dataset\r\n\r\n def do_transforms(self, sample):\r\n img, imo, json = sample\r\n self.transforms_img = transforms.Compose([\r\n transforms.ToTensor(),\r\n transforms.Resize(size=(224,224), interpolation=transforms.InterpolationMode.BILINEAR)\r\n ])\r\n \r\n self.transforms_imo = transforms.Compose([\r\n transforms.ToTensor()\r\n ])\r\n\r\n img = self.transforms_img(img)\r\n imo = self.transforms_imo(imo)\r\n return img, imo, json\r\n```\r\nBoth `img` and `imo` are later passed to the processor as:\r\n`processed_img = self.image_processor(list(img), return_tensors='pt', padding=True).to(self.device)`\r\n`processed_imo = self.image_processor(list(imo), return_tensors='pt', padding=True).to(self.device)`",
"In the case of using very large datasets in PyTorch, I would recommend using torchvision transformations for the whole preprocessing pipeline in place of the image processors. The image processors are great for getting started, however they unfortunately aren't fast. We have examples of pipelines using the data stored in image processor and how to integrate them with torchvision e.g. [this one](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py). \r\n\r\nIn order to replicate the CLIP processing pipeline, I recommend looking at the [image processor code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/image_processing_clip.py) and the [corresponding configuration](https://huggingface.co/openai/clip-vit-base-patch32/blob/main/preprocessor_config.json) you're trying to emulate. \r\n\r\nIn the example you posted above, it would look something like this: \r\n```\r\n\r\nclass MultiData:\r\n def __init__(self, wds_path, image_size, image_mean, image_std):\r\n self.wds_path = wds_path\r\n self.dataset = wds.WebDataset(self.wds_path)\r\n self.transforms_img = transforms.Compose([\r\n transforms.Resize(size=image_size, interpolation=transforms.InterpolationMode.BILINEAR),\r\n transforms.CenterCrop(size=image_size),\r\n transforms.ToTensor(),\r\n transforms.Normalize(mean=image_processor.image_mean, std=image_processor.image_std),\r\n ])\r\n print('Initializing dataset')\r\n\r\n def get_ds(self):\r\n self.dataset = self.dataset.shuffle(1000).decode('rgb').to_tuple(\"groundlevel.jpg\", \"overhead.jpg\", \"metadata.json\")\r\n self.dataset = self.dataset.map(self.do_transforms)\r\n return self.dataset\r\n\r\n def do_transforms(self, sample):\r\n img, imo, json = sample \r\n img = self.transforms_img(img)\r\n imo = self.transforms_imo(imo)\r\n return img, imo, json\r\n\r\nimage_processor = AutoImageProcessor.from_pretrained(\"openai/clip-vit-base-patch32\")\r\nimage_size = image_processor.size[\"shortest_edge\"]\r\nmultidata = Multidata(wds_path, image_size, image_processor.image_mean, image_processor.image_std)\r\n```\r\n\r\nAdditional things to note:\r\n* In your [example above](https://github.com/huggingface/transformers/issues/21620#issuecomment-1429359464), the images are being resized to (224, 224). So even if center cropping was disabled as I suggested earlier, the output images wouldn't be of dimension (512, 512). \r\n* For [torchvision](https://pytorch.org/vision/main/generated/torchvision.transforms.Resize.html), the resulting output size of the image will be different for `Resize(a)` versus `Resize((a, a))`. The image processor emulates the first behaviour `Resize(a)`, where the shortest edge is resized to `a` and the longest edge resized to preserve the aspect ratio. \r\n* For `processed_img = self.image_processor(list(img), return_tensors='pt', padding=True).to(self.device)` - `padding` isn't a defined argument in the `image_process.preprocess` method. As such, it won't do anything and can be removed. ",
"I'm closing this issue, as `do_resize` was behaving as expected. "
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
### System Info
```
- `transformers` version: 4.26.1
- Platform: Linux-3.10.0-1160.53.1.el7.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@amyeroberts @NielsRogge @arthur
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have a situation where I convert my inputs to tensors and resize them before passing them to CLIPImageProcessor. Hence, I want to disable the resize operation inside the CLIPImageProcessor. However, when I pass `False` to the `do_resize` flag, the tensors it returns are still resized to the default 224x224 size. Here is a reproducible example:
```
from transformers import CLIPImageProcessor
import torch

batched_tensor = torch.rand((2, 3, 512, 512))
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
processed_image = processor(list(batched_tensor), return_tensors='pt', padding=True, do_resize=False)
print(processed_image['pixel_values'].shape)
Output:
torch.Size([2, 3, 224, 224])
```
I need to use a `DataLoader`, which does not accept variable-sized inputs (which is the case with my data), and hence I need to resize the images before sending them to the CLIPImageProcessor.
### Expected behavior
I would expect the output of the last line to be `torch.Size([2,3,512,512])`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21620/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21619
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21619/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21619/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21619/events
|
https://github.com/huggingface/transformers/pull/21619
| 1,583,671,125
|
PR_kwDOCUB6oc5J6zws
| 21,619
|
Fix passing kwargs to TFBertTokenizer
|
{
"login": "balvisio",
"id": 1909351,
"node_id": "MDQ6VXNlcjE5MDkzNTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1909351?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/balvisio",
"html_url": "https://github.com/balvisio",
"followers_url": "https://api.github.com/users/balvisio/followers",
"following_url": "https://api.github.com/users/balvisio/following{/other_user}",
"gists_url": "https://api.github.com/users/balvisio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/balvisio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/balvisio/subscriptions",
"organizations_url": "https://api.github.com/users/balvisio/orgs",
"repos_url": "https://api.github.com/users/balvisio/repos",
"events_url": "https://api.github.com/users/balvisio/events{/privacy}",
"received_events_url": "https://api.github.com/users/balvisio/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Also cc @Rocketknight1 "
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes passing `kwargs` when creating a tokenizer via `TFBertTokenizer.from_pretrained()`. Currently, when a `kwarg` is passed, the following error is raised:
```
>>> from transformers import TFBertTokenizer
>>> tokenizer = TFBertTokenizer.from_pretrained("distilbert-base-cased", do_lower_case=False)
TypeError: transformers.models.bert.tokenization_bert_tf.TFBertTokenizer() got multiple values for keyword argument 'vocab_list'
```
By popping the arguments from `kwargs` we avoid the ambiguity.
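A minimal standalone sketch (hypothetical names, not the actual transformers code) of why popping avoids the duplicate-keyword error:
```python
def build_tokenizer(vocab_list, do_lower_case=True, **kwargs):
    return {"vocab_list": vocab_list, "do_lower_case": do_lower_case, **kwargs}

def from_pretrained(name, **kwargs):
    vocab_list = ["[PAD]", "[CLS]"]  # in reality derived from the checkpoint
    # Pop the argument so it is no longer in kwargs when we pass it explicitly.
    do_lower_case = kwargs.pop("do_lower_case", True)
    return build_tokenizer(vocab_list, do_lower_case=do_lower_case, **kwargs)

tok = from_pretrained("distilbert-base-cased", do_lower_case=False)
print(tok["do_lower_case"])  # False
```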
@ArthurZucker You might be interested in this PR. It is not clear to me which `kwargs` should actually be allowed when calling `from_pretrained()`. For example, `do_lower_case` makes sense, but I am not sure whether `vocab_list` or `cls_token_id` should even be allowed.
Thanks!
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21619/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21619",
"html_url": "https://github.com/huggingface/transformers/pull/21619",
"diff_url": "https://github.com/huggingface/transformers/pull/21619.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21619.patch",
"merged_at": 1676470728000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21618
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21618/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21618/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21618/events
|
https://github.com/huggingface/transformers/issues/21618
| 1,583,527,180
|
I_kwDOCUB6oc5eYrUM
| 21,618
|
what's the format of my own datasets when running language-modeling with gpt2
|
{
"login": "ucas010",
"id": 50656998,
"node_id": "MDQ6VXNlcjUwNjU2OTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/50656998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ucas010",
"html_url": "https://github.com/ucas010",
"followers_url": "https://api.github.com/users/ucas010/followers",
"following_url": "https://api.github.com/users/ucas010/following{/other_user}",
"gists_url": "https://api.github.com/users/ucas010/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ucas010/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ucas010/subscriptions",
"organizations_url": "https://api.github.com/users/ucas010/orgs",
"repos_url": "https://api.github.com/users/ucas010/repos",
"events_url": "https://api.github.com/users/ucas010/events{/privacy}",
"received_events_url": "https://api.github.com/users/ucas010/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for questions like this as we keep the issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,679
| 1,679
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-3.10.0-1160.81.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@vanpelt @pvl @arfon @xeb @kashif @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I want to know how to construct my own dataset, and whether GPT-2 supports Chinese text.
My dataset consists of Chinese sentences in a plain txt file, with no labels.
Could you please help me?
### Expected behavior
Must there be a [CLS] token at the beginning of each sentence?
And what is the meaning of the "==" markers, as in the following excerpt (from wiki.test.raw):
= Robert Boulter =
Robert Boulter is an English film , television and theatre actor . He had a guest @-@ starring role on the television series The Bill in 2000 . This was followed by a starring role in the play Herons written by Simon Stephens , which was performed in 2001 at the Royal Court Theatre . He had a guest role in the television series Judge John Deed in 2002 . In 2004 Boulter landed a role as " Craig " in the episode " Teddy 's Story " of the television series The Long Firm ; he starred alongside actors Mark Strong and Derek Jacobi . He was cast in the 2005 theatre productions of the Philip Ridley play Mercury Fur , which was performed at the Drum Theatre in Plymouth and the Menier Chocolate Factory in London . He was directed by John Tiffany and starred alongside Ben Whishaw , Shane Zaza , Harry Kent , Fraser Ayres , Sophie Stanton and Dominic Hall .
In 2006 , Boulter starred alongside Whishaw in the play Citizenship written by Mark Ravenhill . He appeared on a 2006 episode of the television series , Doctors , followed by a role in the 2007 theatre production of How to Curse directed by Josie Rourke . How to Curse was performed at Bush Theatre in the London Borough of Hammersmith and Fulham . Boulter starred in two films in 2008 , Daylight Robbery by filmmaker Paris Leonti , and Donkey Punch directed by Olly Blackburn . In May 2008 , Boulter made a guest appearance on a two @-@ part episode arc of the television series Waking the Dead , followed by an appearance on the television series Survivors in November 2008 . He had a recurring role in ten episodes of the television series Casualty in 2010 , as " Kieron Fletcher " . Boulter starred in the 2011 film Mercenaries directed by Paris Leonti .
= = Career = =
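For what it's worth: GPT-2 does not use a [CLS] token (that convention is BERT-specific), and the "= ... =" markers are simply WikiText section headings. A minimal sketch of a plain-text training file (the file name and sentences are placeholders):
```python
# Hypothetical example: a plain UTF-8 text file, one sentence or
# paragraph per line; no special tokens are required for GPT-2.
with open("train.txt", "w", encoding="utf-8") as f:
    f.write("这是第一句话。\n")
    f.write("这是第二句话。\n")
```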
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21618/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21617
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21617/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21617/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21617/events
|
https://github.com/huggingface/transformers/issues/21617
| 1,583,449,731
|
I_kwDOCUB6oc5eYYaD
| 21,617
|
[WHISPER] Unreliable timestamp with whisper for videos under 30 seconds
|
{
"login": "altryne",
"id": 463317,
"node_id": "MDQ6VXNlcjQ2MzMxNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/463317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/altryne",
"html_url": "https://github.com/altryne",
"followers_url": "https://api.github.com/users/altryne/followers",
"following_url": "https://api.github.com/users/altryne/following{/other_user}",
"gists_url": "https://api.github.com/users/altryne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/altryne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/altryne/subscriptions",
"organizations_url": "https://api.github.com/users/altryne/orgs",
"repos_url": "https://api.github.com/users/altryne/repos",
"events_url": "https://api.github.com/users/altryne/events{/privacy}",
"received_events_url": "https://api.github.com/users/altryne/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Can you provide a reproduction script to make sure we are running with the same parameters? ๐ \r\nAlso this might ring some bels to @Narsil. \r\nI know we interacted before, but just want to make sure which transformers version you are using and which calls. Then I'll be able to dig ! Thanks for the issue ๐ \r\n",
"@altryne @ArthurZucker .\r\n\r\nWhile deep diving into whisper, I've notived `openai/whisper` uses timestamp ALL the time, while `transformers` doesn't (you have to ask for timestamps for us to use them).\r\n\r\nI have seen BIG discrepancies on some examples, I am guessing because training was somehow biased with timestamps for whisper. \r\n\r\nCould that be it ?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,679
| 1,679
|
CONTRIBUTOR
| null |
### System Info
Hey, I noticed that there's an unreliable timestamp issue happening with Whisper through transformers that doesn't show up in the original whisper.
In this example:
https://targum.video/v/47160791e0e305ff7f22e84203f1b196
The "people on streches.." was said at the end of the video, but the timestamp was placed around 8 seconds in.
Here's the same video translated with large-2 whisper from OpenAI
https://targum.video/v/8c5e21ff6da8947c02cdb40097eadf50
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I can send the exact link where this happens over DM, but basically the video from this tweet:
https://twitter.com/wxkaitlin/status/1625326828264071168
Is getting the wrong subtitles (some are missing) + a wrong timestamp
### Expected behavior
Expected the transformers Whisper to behave exactly like (but faster than) the OpenAI Whisper.
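For reference, a hedged sketch of requesting timestamps explicitly from the `transformers` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Unlike openai/whisper, transformers only predicts timestamps when
# they are explicitly requested.
pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v2",
    chunk_length_s=30,
)
out = pipe("audio.mp3", return_timestamps=True)
print(out["chunks"])  # list of {"text": ..., "timestamp": (start, end)}
```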
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21617/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21616
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21616/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21616/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21616/events
|
https://github.com/huggingface/transformers/issues/21616
| 1,583,442,194
|
I_kwDOCUB6oc5eYWkS
| 21,616
|
Different inference results from local transformer vs inference API
|
{
"login": "logandeboo",
"id": 49734611,
"node_id": "MDQ6VXNlcjQ5NzM0NjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/49734611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/logandeboo",
"html_url": "https://github.com/logandeboo",
"followers_url": "https://api.github.com/users/logandeboo/followers",
"following_url": "https://api.github.com/users/logandeboo/following{/other_user}",
"gists_url": "https://api.github.com/users/logandeboo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/logandeboo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/logandeboo/subscriptions",
"organizations_url": "https://api.github.com/users/logandeboo/orgs",
"repos_url": "https://api.github.com/users/logandeboo/repos",
"events_url": "https://api.github.com/users/logandeboo/events{/privacy}",
"received_events_url": "https://api.github.com/users/logandeboo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Narsil ",
"Small differences in numbers can be explained by hardware, torch version etc... Nothing can be done about it.\r\n\r\nFor the difference in output the API uses a different default from the pipeline `pipe = pipeline(..., topk=None)` as it makes more sense for the widget to see multiple proposition. \r\nIn addition the results are sorted for the API (again for UX).\r\n\r\nAre you able to reproduce larger than 1 results ? Seems like a pretty bad bug if true !",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I am having the same issue. I though it may be due to me using TF instead of pytorch, or as was suggested by hardware differences. I am however seeing bigger difference than yours, the inference api gets me some positives while the local model some (false) negatives. "
] | 1,676
| 1,705
| 1,679
|
NONE
| null |
### System Info
I am getting two slightly different probability values when comparing inference results from the local transformer and the inference API on the same sentence, and I am wondering why this is happening. It only occurs for some sentences.
<img width="1617" alt="Screen Shot 2023-02-13 at 7 46 51 PM" src="https://user-images.githubusercontent.com/49734611/218634176-73911bbc-26a0-443c-8aac-96329a3d613f.png">
Moreover, the local transformer seems to select the highest-probability result and return it alone, compared to the API, which returns a score for each label. Sometimes a score from the API is greater than 1 (I have seen 9), and I am wondering why that is and whether it invalidates the results.
Cheers!
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
<img width="1612" alt="Screen Shot 2023-02-13 at 7 53 26 PM" src="https://user-images.githubusercontent.com/49734611/218635058-6322388b-2f50-48c3-8c5d-e3357a125002.png">
### Expected behavior
Naturally I expect each version of the model to produce the same score.
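Per the pipeline defaults mentioned in the comments above, a hedged sketch of matching the API output locally (assuming a text-classification pipeline; the model name is a placeholder):
```python
from transformers import pipeline

# top_k=None makes the local pipeline return a score for every label,
# like the Inference API widget, instead of only the top prediction.
pipe = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    top_k=None,
)
print(pipe("I love this!"))
```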
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21616/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21615
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21615/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21615/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21615/events
|
https://github.com/huggingface/transformers/issues/21615
| 1,583,424,438
|
I_kwDOCUB6oc5eYSO2
| 21,615
|
run run_language_modeling got bug
|
{
"login": "ucas010",
"id": 50656998,
"node_id": "MDQ6VXNlcjUwNjU2OTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/50656998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ucas010",
"html_url": "https://github.com/ucas010",
"followers_url": "https://api.github.com/users/ucas010/followers",
"following_url": "https://api.github.com/users/ucas010/following{/other_user}",
"gists_url": "https://api.github.com/users/ucas010/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ucas010/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ucas010/subscriptions",
"organizations_url": "https://api.github.com/users/ucas010/orgs",
"repos_url": "https://api.github.com/users/ucas010/repos",
"events_url": "https://api.github.com/users/ucas010/events{/privacy}",
"received_events_url": "https://api.github.com/users/ucas010/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is an unmaintained example that won't work with the last version of transformers.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,679
| 1,679
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-3.10.0-1160.81.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@vanpelt @pvl @arfon @xeb @kashif @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
link:https://github.com/huggingface/transformers/blob/main/examples/legacy/run_language_modeling.py
```
export TRAIN_FILE=/path/to/dataset/wiki.train.raw
export TEST_FILE=/path/to/dataset/wiki.test.raw
python run_language_modeling.py \
--output_dir=output \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE
```
error:
Traceback (most recent call last):
  File "/data/transformers/examples/legacy/run_language_modeling.py", line 375, in <module>
    main()
  File "/data/transformers/examples/legacy/run_language_modeling.py", line 291, in main
    data_args.block_size = tokenizer.max_len
AttributeError: 'GPT2TokenizerFast' object has no attribute 'max_len'
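A hedged workaround, assuming the rest of the legacy script is left unchanged: `max_len` was removed from tokenizers in transformers v4, and `model_max_length` is its replacement.
```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
# The legacy script reads `tokenizer.max_len`, which no longer exists;
# `model_max_length` carries the same information now.
block_size = tokenizer.model_max_length
print(block_size)  # 1024 for gpt2
```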
### Expected behavior
Looking forward to a kind reply and a fix for the problem.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21615/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21614
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21614/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21614/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21614/events
|
https://github.com/huggingface/transformers/pull/21614
| 1,583,388,482
|
PR_kwDOCUB6oc5J53y-
| 21,614
|
fix: Race Condition when using Sagemaker Checkpointing and Model Repository
|
{
"login": "DougTrajano",
"id": 8703022,
"node_id": "MDQ6VXNlcjg3MDMwMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8703022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DougTrajano",
"html_url": "https://github.com/DougTrajano",
"followers_url": "https://api.github.com/users/DougTrajano/followers",
"following_url": "https://api.github.com/users/DougTrajano/following{/other_user}",
"gists_url": "https://api.github.com/users/DougTrajano/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DougTrajano/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DougTrajano/subscriptions",
"organizations_url": "https://api.github.com/users/DougTrajano/orgs",
"repos_url": "https://api.github.com/users/DougTrajano/repos",
"events_url": "https://api.github.com/users/DougTrajano/events{/privacy}",
"received_events_url": "https://api.github.com/users/DougTrajano/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Perfect, you just need to run `make style` on your branch with our quality tools installed and we should be good to merge!",
"> Perfect, you just need to run `make style` on your branch with our quality tools installed and we should be good to merge!\r\n\r\nok, let me do that so.",
"@sgugger sorry for the delay, some meetings here :/\r\n\r\nit's done"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #21586
With the following changes:
- Added `_add_sm_patterns_to_gitignore()` as a helper method in the Trainer class that adds the patterns used by the SageMaker Checkpointing feature to the .gitignore file when initializing the Model Repository (see the sketch below).
- Added a condition in `init_git_repo()` to check whether an important SageMaker environment variable is set.
It also includes a fix in the huggingface_hub library in order to consider excluding patterns when defining large files: https://github.com/huggingface/huggingface_hub/pull/1339
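For illustration only, a minimal sketch of the idea (the environment variable and the concrete patterns below are assumptions, not necessarily what the merged PR checks or writes):
```python
import os

# Hypothetical helper: only patch .gitignore when running inside a
# SageMaker training container, where SM_TRAINING_ENV is set.
def running_on_sagemaker() -> bool:
    return "SM_TRAINING_ENV" in os.environ

def add_sm_patterns_to_gitignore(repo_dir: str, patterns: list) -> None:
    gitignore = os.path.join(repo_dir, ".gitignore")
    existing = ""
    if os.path.exists(gitignore):
        with open(gitignore, "r", encoding="utf-8") as f:
            existing = f.read()
    with open(gitignore, "a", encoding="utf-8") as f:
        for pattern in patterns:
            if pattern not in existing:
                f.write(pattern + "\n")
```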
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger as we discussed this issue in the https://github.com/huggingface/transformers/issues/21586
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21614/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21614",
"html_url": "https://github.com/huggingface/transformers/pull/21614",
"diff_url": "https://github.com/huggingface/transformers/pull/21614.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21614.patch",
"merged_at": 1676409098000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21613
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21613/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21613/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21613/events
|
https://github.com/huggingface/transformers/issues/21613
| 1,583,319,155
|
I_kwDOCUB6oc5eX4hz
| 21,613
|
`pipeline` does not load from local folder, instead, it always downloads models from the internet.
|
{
"login": "z7ye",
"id": 25996703,
"node_id": "MDQ6VXNlcjI1OTk2NzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/25996703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/z7ye",
"html_url": "https://github.com/z7ye",
"followers_url": "https://api.github.com/users/z7ye/followers",
"following_url": "https://api.github.com/users/z7ye/following{/other_user}",
"gists_url": "https://api.github.com/users/z7ye/gists{/gist_id}",
"starred_url": "https://api.github.com/users/z7ye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/z7ye/subscriptions",
"organizations_url": "https://api.github.com/users/z7ye/orgs",
"repos_url": "https://api.github.com/users/z7ye/repos",
"events_url": "https://api.github.com/users/z7ye/events{/privacy}",
"received_events_url": "https://api.github.com/users/z7ye/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Narsil ",
"Seems to be working fine on my end, but there needs to be a few modifications (I'm super suprises it *can* download anything, your included code just crashes normally).\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(task=\"image-classification\", model=\"google/vit-base-patch16-224\")\r\npipe.save_pretrained(\"./local_vit\")\r\n\r\npipe = pipeline(task=\"image-classification\", model=\"./local_vit\")\r\n```\r\n\r\nYou need to specify the `task` for local, since that information is contained in the HUB, not in the config directly. So if you specify it, it works with everything locally.\r\n\r\nOn what version of transformers are you on ?",
"Hi, sorry I specified the task for local, and it still does not work. I will check the version. ",
"the version is 4.24.0",
"I cannot reproduce even on 4.24.0 Can you include the full script + error ? ",
"sure. I am still seeing the error. I will post the code later.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,680
| 1,680
|
NONE
| null |
### System Info
I created a `pipeline` and called `save_pretrained(...)` to save it to a local directory. However, when I load it back using `pipeline(model="local_folder")`, it either loads from the cache or tries to start downloading from the internet.
However, if I do the following, it works. I am using the latest `transformers`. Am I misusing it or misunderstanding something?
```
mypipeline.save_pretrained(save_directory=model_path)
mypipeline.model.config.use_pretrained_backbone = False
mypipeline.model.config.save_pretrained(save_directory=model_path)
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. run the code below
```
from transformers import pipeline
vision_classifier = pipeline(task="image-classification", model="google/vit-base-patch16-224")
vision_classifier.save_pretrained("./huggingface")
```
2. now delete the cache
3. load the model now
```
vision_classifier = pipeline("./huggingface")
```
and it will start downloading the pretrained model again.
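Following the suggestion in the comments above, passing the task explicitly makes the local load work:
```python
from transformers import pipeline

# The task is stored on the Hub, not in the local config, so it has to
# be given explicitly when loading from a folder.
vision_classifier = pipeline(task="image-classification", model="./huggingface")
```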
### Expected behavior
I expect it to load the model from the local folder.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21613/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21613/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21612
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21612/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21612/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21612/events
|
https://github.com/huggingface/transformers/pull/21612
| 1,583,243,028
|
PR_kwDOCUB6oc5J5ZEQ
| 21,612
|
fix: Change is_last chunk calc and add conditional break in chunk_iter
|
{
"login": "connor-henderson",
"id": 78612354,
"node_id": "MDQ6VXNlcjc4NjEyMzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/connor-henderson",
"html_url": "https://github.com/connor-henderson",
"followers_url": "https://api.github.com/users/connor-henderson/followers",
"following_url": "https://api.github.com/users/connor-henderson/following{/other_user}",
"gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions",
"organizations_url": "https://api.github.com/users/connor-henderson/orgs",
"repos_url": "https://api.github.com/users/connor-henderson/repos",
"events_url": "https://api.github.com/users/connor-henderson/events{/privacy}",
"received_events_url": "https://api.github.com/users/connor-henderson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ArthurZucker Could you please review this ?\r\n\r\nI am ok with this PR, but since I made the proposed change, I'm too heavily biased to review properly.\r\n\r\nI will take another look still now that I might be able to think straighter now.\r\n",
"Just thinking we should update the values of the whisper timestamps (so try running the slow tests for the ASR pipeline, only those related to whisper)",
"> Just thinking we should update the values of the whisper timestamps (so try running the slow tests for the ASR pipeline, only those related to whisper)\r\n\r\nUpdated the timestamps after running the below slow asr pipeline tests:\r\n`test_return_timestamps_in_preprocess`\r\n`test_torch_whisper`\r\n`test_find_longest_common_subsequence`\r\n`test_whisper_timestamp_prediction` (only update was here)\r\n`test_simple_whisper_asr`\r\n`test_simple_whisper_translation`",
"Thank you very much for this ! And detecting this bug !\r\n",
"Thanks again for your contribution!"
] | 1,676
| 1,679
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #21568
With three functional changes
- Updates the is_last calc to include the right_stride
- Conditionally breaks at the end of the loop block if is_last is true
- Adds to a test
And two non-functional changes
- Renamed `i` to `chunk_start_idx` and put `chunk_end_idx` in a variable
- Removed a comment and added another
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
[Issue](https://github.com/huggingface/transformers/issues/21568)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Narsil
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21612/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21612/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21612",
"html_url": "https://github.com/huggingface/transformers/pull/21612",
"diff_url": "https://github.com/huggingface/transformers/pull/21612.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21612.patch",
"merged_at": 1677223833000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21611
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21611/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21611/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21611/events
|
https://github.com/huggingface/transformers/pull/21611
| 1,583,065,934
|
PR_kwDOCUB6oc5J4yoZ
| 21,611
|
Add in big model inference to issue template
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds in @sgugger and myself as @'s on the big model inference as part of the issue template
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21611/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21611/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21611",
"html_url": "https://github.com/huggingface/transformers/pull/21611",
"diff_url": "https://github.com/huggingface/transformers/pull/21611.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21611.patch",
"merged_at": 1676324434000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21610
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21610/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21610/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21610/events
|
https://github.com/huggingface/transformers/issues/21610
| 1,583,028,640
|
I_kwDOCUB6oc5eWxmg
| 21,610
|
from_pretrained() breaks with empty state_dict and model path as None
|
{
"login": "harubaru",
"id": 26317155,
"node_id": "MDQ6VXNlcjI2MzE3MTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/26317155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harubaru",
"html_url": "https://github.com/harubaru",
"followers_url": "https://api.github.com/users/harubaru/followers",
"following_url": "https://api.github.com/users/harubaru/following{/other_user}",
"gists_url": "https://api.github.com/users/harubaru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harubaru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harubaru/subscriptions",
"organizations_url": "https://api.github.com/users/harubaru/orgs",
"repos_url": "https://api.github.com/users/harubaru/repos",
"events_url": "https://api.github.com/users/harubaru/events{/privacy}",
"received_events_url": "https://api.github.com/users/harubaru/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This was fixed by #21542. You will need to install from source while the fix makes it way to the next release.",
"The PR didn't seem to fix the issue, I get a different error this time but the reproduction code still throws an error with the latest changes from source",
"Indeed, the fix was incomplete for generative models. The PR linked above should fully fix it."
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @youne
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
To reproduce the behavior, you can run this snippet of code:
```py
from transformers import GPTJConfig, AutoModelForCausalLM
from collections import OrderedDict
config = GPTJConfig(
n_positions=128,
n_embd=16,
n_layer=2,
n_head=2
)
AutoModelForCausalLM.from_pretrained(
None, config=config, state_dict=OrderedDict()
)
```
### Expected behavior
The expected behavior from this is to allow a model to be initialized from scratch with an empty `state_dict` and `None` as the pretrained model. There is a [tool that I am working with](https://github.com/coreweave/tensorizer) that is broken due to a regression that was introduced around version 4.23.1. From the trace of running the reproduction code, a PR that handles the case when `resolved_archive_file` and the pretrained model path are `None` would fix this issue.
It also seems that this issue is related: https://github.com/huggingface/transformers/issues/21526
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21610/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21609
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21609/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21609/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21609/events
|
https://github.com/huggingface/transformers/pull/21609
| 1,582,907,684
|
PR_kwDOCUB6oc5J4Qld
| 21,609
|
Fix env. variable type issue in testing
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
COLLABORATOR
| null |
# What does this PR do?
Fix env. variable type issue in testing.
If `PYTEST_TIMEOUT` is set by `export PYTEST_TIMEOUT=...` or `PYTEST_TIMEOUT=xxx python3 -m pytest ...`, we actually get a `string` instead of `int`, and the test fails.
On our CI (within docker), we don't have this issue, probably due to the way docker handles env. variables.
Let's try to avoid unexpected error though.
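A minimal sketch of the cast, assuming a default timeout (the 600 here is an assumption, not necessarily the value used in the test suite):
```python
import os

# Environment variables are always strings; cast to int before doing
# arithmetic or comparisons with the timeout value.
timeout = int(os.environ.get("PYTEST_TIMEOUT", 600))
```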
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21609/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21609",
"html_url": "https://github.com/huggingface/transformers/pull/21609",
"diff_url": "https://github.com/huggingface/transformers/pull/21609.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21609.patch",
"merged_at": 1676318006000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21608
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21608/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21608/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21608/events
|
https://github.com/huggingface/transformers/pull/21608
| 1,582,858,113
|
PR_kwDOCUB6oc5J4GTX
| 21,608
|
Fix typo in QA task guide
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
MEMBER
| null |
Removes random link to ALBERT model doc in the question-answering task guide.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21608/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21608",
"html_url": "https://github.com/huggingface/transformers/pull/21608",
"diff_url": "https://github.com/huggingface/transformers/pull/21608.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21608.patch",
"merged_at": 1676404939000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21607
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21607/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21607/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21607/events
|
https://github.com/huggingface/transformers/pull/21607
| 1,582,847,606
|
PR_kwDOCUB6oc5J4EJ6
| 21,607
|
Clarify available pipelines in quicktour
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
MEMBER
| null |
Addresses feedback from #21557 to make it super clear the table doesn't contain all available pipelines and redirects users to the pipeline API reference docs instead. This PR also swaps out some of the NLP tasks for some more multimodal ones :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21607/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21607",
"html_url": "https://github.com/huggingface/transformers/pull/21607",
"diff_url": "https://github.com/huggingface/transformers/pull/21607.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21607.patch",
"merged_at": 1676317069000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21606
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21606/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21606/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21606/events
|
https://github.com/huggingface/transformers/pull/21606
| 1,582,590,976
|
PR_kwDOCUB6oc5J3NVP
| 21,606
|
Fix TF CTC tests
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh before @amyeroberts work in #21502 these tests had no overwrite, correct. However, see the note in Amy's PR -- the labels were not correctly handled, which meant that `test_dataset_conversion` was probably terminating early [here](https://github.com/huggingface/transformers/blob/edc1e734bfc01109b8c66881d950ebbda032a6d2/tests/test_modeling_tf_common.py#L1815), which would explain the ausence of crashes prior to the PR :)",
"Thanks for the fix @gante ! \r\n\r\n@ydshieh Yes, Joao's correct. There's three different cases where the state of the tests changed: \r\n* For `test_loss_computation`, the previous test was being skipped. [The given reason](https://github.com/huggingface/transformers/blob/6f79d264422245d88c7a34032c1a8254a0c65752/tests/models/hubert/test_modeling_tf_hubert.py#L327) was incorrect shapes - however, at the time of adding #21502 the returned loss was actually `None`.\r\n* Some tests were being skipped by overloading [with an empty test](https://github.com/huggingface/transformers/blob/6f79d264422245d88c7a34032c1a8254a0c65752/tests/models/hubert/test_modeling_tf_hubert.py#L313) rather than `unittest.skip` and so previously showed as passing.\r\n* The branch calculating loss wasn't being touched when fitting the model - `input_values` passed in as a dictionary - as `labels` was [taken from keyword arguments, rather than `outputs['labels']`](https://github.com/huggingface/transformers/blob/6f79d264422245d88c7a34032c1a8254a0c65752/src/transformers/models/hubert/modeling_tf_hubert.py#L1640). This meant some tests e.g. [`test_keras_fit` in TFModelTesterMixin](https://github.com/huggingface/transformers/blob/6f79d264422245d88c7a34032c1a8254a0c65752/tests/test_modeling_tf_common.py#L1526) previously passed as the memory intensive operation was skipped. "
] | 1,676
| 1,676
| 1,676
|
MEMBER
| null |
# What does this PR do?
(see title)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21606/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21606",
"html_url": "https://github.com/huggingface/transformers/pull/21606",
"diff_url": "https://github.com/huggingface/transformers/pull/21606.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21606.patch",
"merged_at": 1676323381000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21605
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21605/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21605/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21605/events
|
https://github.com/huggingface/transformers/issues/21605
| 1,582,437,063
|
I_kwDOCUB6oc5eUhLH
| 21,605
|
Huge JSON file causes error in run_summarization.py
|
{
"login": "AtheerAlgherairy",
"id": 7421065,
"node_id": "MDQ6VXNlcjc0MjEwNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7421065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AtheerAlgherairy",
"html_url": "https://github.com/AtheerAlgherairy",
"followers_url": "https://api.github.com/users/AtheerAlgherairy/followers",
"following_url": "https://api.github.com/users/AtheerAlgherairy/following{/other_user}",
"gists_url": "https://api.github.com/users/AtheerAlgherairy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AtheerAlgherairy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AtheerAlgherairy/subscriptions",
"organizations_url": "https://api.github.com/users/AtheerAlgherairy/orgs",
"repos_url": "https://api.github.com/users/AtheerAlgherairy/repos",
"events_url": "https://api.github.com/users/AtheerAlgherairy/events{/privacy}",
"received_events_url": "https://api.github.com/users/AtheerAlgherairy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @AtheerAlgherairy This is not a `transformers` issue, and it's very likely some memory issue due to the huge file size. [Hugging Face Forum](https://discuss.huggingface.co/) is the place for such kind of questions.\r\n\r\nWithout going into the specific detail, for large dataset like this, you should try to use iterator to avoid loading the whole file into the memory from the beginning.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,679
| 1,679
|
NONE
| null |
I tried to run transformers\examples\pytorch\summarization\run_summarization.py with my own data files (in JSON Lines format). The problem arises only when using a huge JSON file (> 20 GB). With smaller files it works normally.
Any suggestions on how to use the code with large JSON files?
**Framework:**
Transformers 4.20.1
Pytorch 1.11.0+cu113
Datasets 2.3.2
Tokenizers 0.12.1
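Following the iterator suggestion in the comments above, a hedged sketch using Hugging Face Datasets streaming (the file name is a placeholder):
```python
from datasets import load_dataset

# Stream the JSON Lines file instead of materializing all 20+ GB in RAM.
dataset = load_dataset("json", data_files="train.jsonl", streaming=True)
for example in dataset["train"].take(2):
    print(example)
```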
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21605/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21604
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21604/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21604/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21604/events
|
https://github.com/huggingface/transformers/issues/21604
| 1,582,363,876
|
I_kwDOCUB6oc5eUPTk
| 21,604
|
Seq2SeqTrainer with predict_with_generate prints config every step
|
{
"login": "eyalmazuz",
"id": 34383384,
"node_id": "MDQ6VXNlcjM0MzgzMzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/34383384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyalmazuz",
"html_url": "https://github.com/eyalmazuz",
"followers_url": "https://api.github.com/users/eyalmazuz/followers",
"following_url": "https://api.github.com/users/eyalmazuz/following{/other_user}",
"gists_url": "https://api.github.com/users/eyalmazuz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyalmazuz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyalmazuz/subscriptions",
"organizations_url": "https://api.github.com/users/eyalmazuz/orgs",
"repos_url": "https://api.github.com/users/eyalmazuz/repos",
"events_url": "https://api.github.com/users/eyalmazuz/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyalmazuz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante ",
"Hey @eyalmazuz ๐ There is a chance that the issue is sorted already (see [here](https://github.com/huggingface/transformers/pull/21385)). To try it out, install `transformers` from `main` -- let me know if it works!\r\n\r\n`pip install --upgrade git+https://github.com/huggingface/transformers.git`",
"Hi @gante there's no extra printing when using transformers-main I'll close the issue\r\nthank you for your help\r\n\r\n"
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
### System Info
Transformers version 4.26.1
Python version 3.8.12
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm working on a Hebrew-Arabic machine translation task by training the T5 model from scratch
The issue occurs with the Translation code from the tutorial in the huggingface's course:
https://huggingface.co/course/chapter7/4?fw=pt
I have a slightly modified version of it (extra metrics and a slightly different dataset)
here's a link to my code:
https://pastebin.com/LaCAPsCF
``python3 train_model.py --dataset_path ./data/HF_HE_AR_Dataset.json --tokenizer_path ./T5Tokenizer/ --max_length=128 --batch_size=16 --logging_steps 100 --save_steps 100 --model t5-base``
This was the command that I ran, but it doesn't matter much, since the problem is with ``predict_with_generate=True`` in the Seq2SeqTrainingArguments. If it's false, the problem does not occur.
This is what happens when it reaches the evaluation loop

### Expected behavior
Only the TQDM bar is printed
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21604/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/21604/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21603
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21603/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21603/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21603/events
|
https://github.com/huggingface/transformers/pull/21603
| 1,582,315,399
|
PR_kwDOCUB6oc5J2SCj
| 21,603
|
Generate: filter encoder inputs when its signature does not accept wildcards
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Tests for the feature added ๐ \r\n\r\nThe failing CI tests are two known failures, merging."
] | 1,676
| 1,676
| 1,676
|
MEMBER
| null |
# What does this PR do?
Now that we are moving towards any-to-text modalities, `generate` should be strengthened to work as-is without throwing exceptions just because a user has designed a slightly different architecture.
This PR enables the case where the model has some kwargs for operations between the encoder and the decoder -- i.e. for kwargs that can't be used in the encoder, but are also not decoder inputs. Normally, when a kwarg is not an encoder input, a `decoder_` prefix is added to its name, which is not the right argument naming in this case. Fortunately, the solution is simple :D
This PR is a soft requirement to integrate [MM-CoT](https://github.com/amazon-science/mm-cot/tree/main), the alternative being the incorrect renaming of a few arguments to `decoder_(...)`.
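A minimal sketch of the filtering logic described above (names are illustrative, not the actual `generate` internals):
```python
import inspect

def filter_encoder_kwargs(encoder, model_kwargs):
    params = inspect.signature(encoder.forward).parameters
    # If the encoder signature accepts **kwargs (a wildcard), forward
    # everything unchanged.
    if any(p.kind == inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(model_kwargs)
    # Otherwise, keep only the arguments the encoder actually accepts.
    return {name: value for name, value in model_kwargs.items() if name in params}
```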
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21603/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21603",
"html_url": "https://github.com/huggingface/transformers/pull/21603",
"diff_url": "https://github.com/huggingface/transformers/pull/21603.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21603.patch",
"merged_at": 1676371606000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21602
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21602/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21602/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21602/events
|
https://github.com/huggingface/transformers/pull/21602
| 1,582,274,437
|
PR_kwDOCUB6oc5J2JLW
| 21,602
|
[MINOR] Fix link in timeseries transformer docs
|
{
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Link seems to work in the staging docs"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
I'm not sure this will also fix the currently broken link in the docs (Specifically here: https://huggingface.co/docs/transformers/model_doc/time_series_transformer) whereby clicking on `kashif` attempts to link to the following non-existent URL: https://huggingface.co/docs/transformers/model_doc/%3Chttps://huggingface.co/kashif
# What does this PR do?
Fixes a broken link in the above-linked documentation.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
I'm not sure. @sgugger or @osanseviero maybe?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21602/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21602",
"html_url": "https://github.com/huggingface/transformers/pull/21602",
"diff_url": "https://github.com/huggingface/transformers/pull/21602.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21602.patch",
"merged_at": 1676301076000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21601
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21601/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21601/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21601/events
|
https://github.com/huggingface/transformers/pull/21601
| 1,582,191,913
|
PR_kwDOCUB6oc5J13EC
| 21,601
|
CI: skip failing TF hubert test
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @amyeroberts ",
"_The documentation is not available anymore as the PR was closed or merged._",
"@gante is this issue fixed? I am asking because I am still getting this error [here](https://app.circleci.com/pipelines/github/huggingface/transformers/57650/workflows/dcdbb1bd-f61c-445a-b276-4f649416931e/jobs/699951?invite=true#step-111-4137) in this [PR](https://github.com/huggingface/transformers/pull/21349) . I did rebase to `upstream/main` .",
"@susnato see #21606 (I skipped the wrong one here)",
"Hi @gante thanks for the fix but now there seems to be a new issue popping up regarding that same model, \r\n```\r\ntests/models/hubert/test_modeling_tf_hubert.py::TFHubertRobustModelTest::test_keras_fit\r\n```\r\nThis test is failing in the same PR I mentioned above. I did rebase before pushing."
] | 1,676
| 1,684
| 1,676
|
MEMBER
| null |
# What does this PR do?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21601/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21601",
"html_url": "https://github.com/huggingface/transformers/pull/21601",
"diff_url": "https://github.com/huggingface/transformers/pull/21601.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21601.patch",
"merged_at": 1676298863000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21600
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21600/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21600/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21600/events
|
https://github.com/huggingface/transformers/pull/21600
| 1,582,155,027
|
PR_kwDOCUB6oc5J1vD8
| 21,600
|
[Pipeline] Add zero shot audio classification pipeline
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21600). All of your documentation changes will be reflected on that endpoint.",
"LGTM. I'm confused by the error in tests which doesn't seem linked to this PR.",
"LGTM ! "
] | 1,676
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Add the `zero_shot_audio_classification_pipeline` for the `CLAP` models. See #21370
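For reference, a hedged usage sketch (the checkpoint name and labels are assumptions, and the call mirrors the zero-shot image classification API):
```python
from transformers import pipeline

classifier = pipeline(
    task="zero-shot-audio-classification",
    model="laion/clap-htsat-unfused",  # assumed CLAP checkpoint
)
# The input can be a path to an audio file or a raw waveform.
result = classifier(
    "dog_bark.wav",
    candidate_labels=["a dog barking", "a cat meowing", "rain falling"],
)
print(result)  # list of {"score": ..., "label": ...}, highest score first
```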
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21600/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21600/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21600",
"html_url": "https://github.com/huggingface/transformers/pull/21600",
"diff_url": "https://github.com/huggingface/transformers/pull/21600.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21600.patch",
"merged_at": 1677494625000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21599
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21599/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21599/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21599/events
|
https://github.com/huggingface/transformers/issues/21599
| 1,582,153,299
|
I_kwDOCUB6oc5eTb5T
| 21,599
|
BLIP-2 batch generate error
|
{
"login": "LiJunnan1992",
"id": 13638455,
"node_id": "MDQ6VXNlcjEzNjM4NDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/13638455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LiJunnan1992",
"html_url": "https://github.com/LiJunnan1992",
"followers_url": "https://api.github.com/users/LiJunnan1992/followers",
"following_url": "https://api.github.com/users/LiJunnan1992/following{/other_user}",
"gists_url": "https://api.github.com/users/LiJunnan1992/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LiJunnan1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LiJunnan1992/subscriptions",
"organizations_url": "https://api.github.com/users/LiJunnan1992/orgs",
"repos_url": "https://api.github.com/users/LiJunnan1992/repos",
"events_url": "https://api.github.com/users/LiJunnan1992/events{/privacy}",
"received_events_url": "https://api.github.com/users/LiJunnan1992/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Can confirm that:\r\n1. The issue only happens with `batch_size>1`\r\n2. The issue happens with both `num_beams>1` and `num_return_sequences>1` (they both rely on input replication, which is my suspicion)\r\n3. https://github.com/huggingface/transformers/pull/21580, which addresses some BLIP2 `.generate()` issues does not fix this issue",
"I think that [this](https://github.com/salesforce/LAVIS/blob/3ac397aa075c3e60b9521b012dda3660e3e35f1e/lavis/models/blip2_models/blip2_opt.py#L213) is not incorporated in our current implementation. It seems the authors only defined this for OPT but not for T5.",
"@NielsRogge yeah, that's the fix ๐ However, I'm generalizing the PR to correctly expand any model input, as it is a bit limited at the moment (it is expanding tensors with specific names, as opposed to all tensors that might be used as model input).",
"FYI, I have tried to do repeat_interleave for the inputs_embeds, but that results in another error.\r\nT5 does not seem need this because of the encoder-decoder architecture.",
"@LiJunnan1992 that's correct, only non-encoder inputs need the expansion. In a nutshell, in the generation loop, we have a 1:1 input-output row correspondence, so we need to expand the model inputs before the loop accordingly.\r\n\r\nT5-BLIP2 sends `inputs_embeds` to the (text) encoder, whereas OPT-BLIP2 has no (text) encoder at all. In the former, the encoder outputs need to be expanded, whereas in the latter the `inputs_embeds` need the expansion treatment.\r\n\r\n#21624 takes care of all those cases for any model input in `model_kwargs`, which should future-proof `.generate()`",
"@LiJunnan1992 if you install `transformers` from `main`, it should be working ๐ ",
"@gante I can verify that this is working now, thanks! When will I be able to pip install this version?",
"@LiJunnan1992 we aim at monthly releases, so 1-2 weeks from now :)"
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
### System Info
`transformers` version: 4.27.0.dev0
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The following error happens when running the BLIP-2 model's `generate()` with `num_beams>1` and input `batch_size>1`:
<img width="892" alt="Screenshot 2023-02-13 at 6 53 28 pm" src="https://user-images.githubusercontent.com/13638455/218443029-8bc45f00-4884-4785-95a9-228aa08d1266.png">
### Expected behavior
The model should be able to do batch generation.
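In a nutshell (per the discussion above), beam search keeps a 1:1 row correspondence between model inputs and generated sequences, so non-encoder inputs such as `inputs_embeds` must be expanded before the generation loop. A minimal sketch of that expansion, not the library's exact code:
```python
import torch


def expand_for_beams(tensor: torch.Tensor, num_beams: int) -> torch.Tensor:
    # (batch, ...) -> (batch * num_beams, ...): each row is repeated
    # num_beams times so every beam sees its own copy of the input.
    return tensor.repeat_interleave(num_beams, dim=0)


inputs_embeds = torch.randn(2, 5, 8)           # batch_size=2
expanded = expand_for_beams(inputs_embeds, 4)  # shape (8, 5, 8)
```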
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21599/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21598
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21598/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21598/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21598/events
|
https://github.com/huggingface/transformers/pull/21598
| 1,582,142,530
|
PR_kwDOCUB6oc5J1sUD
| 21,598
|
Enable `requires_grad` on input embedding to train on top of frozen layers
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks everyone! Merging!"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
## Motivation
In the context of `peft`, users currently need to manually add a forward hook that enables gradient computation on the input after computing the embedding; e.g. for `t5` one needs to call:
```python
def make_inputs_require_grad(module, input, output):
output.requires_grad_(True)
model.get_input_embeddings().register_forward_hook(make_inputs_require_grad)
```
This PR makes life easier for users by wrapping this protocol in a single method, `enable_input_require_grads`. | Related: https://github.com/huggingface/peft/issues/80
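After this PR, the hook above reduces to a single call (sketch):
```python
# Replaces the manual forward hook with one line.
model.enable_input_require_grads()
```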
cc @pacman100
Wdyt @sgugger ? Maybe there is a better solution but not sure here, would love some guidance!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21598/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21598",
"html_url": "https://github.com/huggingface/transformers/pull/21598",
"diff_url": "https://github.com/huggingface/transformers/pull/21598.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21598.patch",
"merged_at": 1676364187000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21597
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21597/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21597/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21597/events
|
https://github.com/huggingface/transformers/pull/21597
| 1,582,121,988
|
PR_kwDOCUB6oc5J1nv1
| 21,597
|
[`bnb`] Let's make the daily CI green
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Yeah I am unsure too, `auto` shold give `float16` according to the weights size: https://huggingface.co/bigscience/bloom-1b7/tree/main (1b7 parameters = 3.4GB in fp16)"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes a test that is currently failing on the `main` branch.
Link to failing test: https://github.com/huggingface/transformers/actions/runs/4154270129/jobs/7186572507
Since the introduction of https://github.com/huggingface/transformers/pull/21524, loading `bigscience/bloom-1b7` with `torch_dtype="auto"` detects `fp32` weights. Therefore the measured relative difference in memory footprint no longer matches the expected one.
This PR fixes the test by forcing `torch_dtype=torch.float16` when loading the fp16 model
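A sketch of the adjusted loading (illustrative; it mirrors the described fix rather than the exact test code, and the 8-bit path assumes `bitsandbytes` is installed):
```python
import torch
from transformers import AutoModelForCausalLM

name = "bigscience/bloom-1b7"

# Force fp16 instead of torch_dtype="auto", which now resolves to fp32
# for this checkpoint and skews the expected memory-footprint ratio.
model_fp16 = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16)
model_8bit = AutoModelForCausalLM.from_pretrained(name, load_in_8bit=True, device_map="auto")

ratio = model_fp16.get_memory_footprint() / model_8bit.get_memory_footprint()
```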
cc @ydshieh @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21597/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21597",
"html_url": "https://github.com/huggingface/transformers/pull/21597",
"diff_url": "https://github.com/huggingface/transformers/pull/21597.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21597.patch",
"merged_at": 1676301531000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21595
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21595/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21595/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21595/events
|
https://github.com/huggingface/transformers/pull/21595
| 1,582,005,789
|
PR_kwDOCUB6oc5J1ONE
| 21,595
|
Fix Blip-2 CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"since we have put `_keep_in_fp32_modules` [in a previous PR](https://github.com/huggingface/transformers/pull/21574), I think it should work!",
"@sgugger I never merge my PR without running on a CI runner-like machines: ran it 3 times and all pass.",
"Yes I think we can be confident to say that @sgugger "
] | 1,676
| 1,676
| 1,676
|
COLLABORATOR
| null |
# What does this PR do?
Avoid GPU OOM by using FP16.
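A sketch of what loading the model in fp16 looks like (the checkpoint id is assumed from the BLIP-2 release):
```python
import torch
from transformers import Blip2ForConditionalGeneration

# fp16 halves the weight memory versus fp32, which avoids the OOM.
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")
```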
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21595/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21595",
"html_url": "https://github.com/huggingface/transformers/pull/21595",
"diff_url": "https://github.com/huggingface/transformers/pull/21595.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21595.patch",
"merged_at": 1676303067000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21594
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21594/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21594/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21594/events
|
https://github.com/huggingface/transformers/pull/21594
| 1,581,783,685
|
PR_kwDOCUB6oc5J0efA
| 21,594
|
Remove trailing 'extractive' word from en documentation
|
{
"login": "tpaviot",
"id": 660130,
"node_id": "MDQ6VXNlcjY2MDEzMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/660130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tpaviot",
"html_url": "https://github.com/tpaviot",
"followers_url": "https://api.github.com/users/tpaviot/followers",
"following_url": "https://api.github.com/users/tpaviot/following{/other_user}",
"gists_url": "https://api.github.com/users/tpaviot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tpaviot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tpaviot/subscriptions",
"organizations_url": "https://api.github.com/users/tpaviot/orgs",
"repos_url": "https://api.github.com/users/tpaviot/repos",
"events_url": "https://api.github.com/users/tpaviot/events{/privacy}",
"received_events_url": "https://api.github.com/users/tpaviot/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Removes an extra word from the documentation.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @sgugger, @stevhliu and @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21594/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21594",
"html_url": "https://github.com/huggingface/transformers/pull/21594",
"diff_url": "https://github.com/huggingface/transformers/pull/21594.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21594.patch",
"merged_at": 1676300941000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21593
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21593/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21593/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21593/events
|
https://github.com/huggingface/transformers/pull/21593
| 1,581,778,018
|
PR_kwDOCUB6oc5J0dO8
| 21,593
|
Region error
|
{
"login": "Aniketsingh12",
"id": 67466295,
"node_id": "MDQ6VXNlcjY3NDY2Mjk1",
"avatar_url": "https://avatars.githubusercontent.com/u/67466295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aniketsingh12",
"html_url": "https://github.com/Aniketsingh12",
"followers_url": "https://api.github.com/users/Aniketsingh12/followers",
"following_url": "https://api.github.com/users/Aniketsingh12/following{/other_user}",
"gists_url": "https://api.github.com/users/Aniketsingh12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aniketsingh12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aniketsingh12/subscriptions",
"organizations_url": "https://api.github.com/users/Aniketsingh12/orgs",
"repos_url": "https://api.github.com/users/Aniketsingh12/repos",
"events_url": "https://api.github.com/users/Aniketsingh12/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aniketsingh12/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21593). All of your documentation changes will be reflected on that endpoint.",
"Thanks for suggestion ",
"Yes, if you could replace `# Metrics` with `# region Metrics` instead that would be great!\r\n\r\nAlternatively I could just ditch all of these region tags, since I think I'm the only person who likes them.",
"@Rocketknight1 I will definitely replace all the metrics with region metrics but any reasons of doing that ?? I am a Beginner so just wanted to understand .",
"@Aniketsingh12 Using region tags allows IDEs like PyCharm and VS Code ([with an addon](https://marketplace.visualstudio.com/items?itemName=maptz.regionfolder)) to fold the areas of code inside the `# region` and `# endregion`, which can make the code easier to read in a long script. They don't serve any function other than that!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,676
| 1,679
| 1,679
|
NONE
| null |
The `# endregion` tags were not raising an error, but they had no effect because the opening `# region` was not being recognized (it was written as `# Metrics`).
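For reference, a folding region only works when both tags are present and spelled with the `region` keyword (a minimal sketch):
```python
# region Metrics
def accuracy(preds, labels):
    # IDEs fold everything between the two tags into one collapsible block.
    return (preds == labels).mean()
# endregion
```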
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21593/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21593",
"html_url": "https://github.com/huggingface/transformers/pull/21593",
"diff_url": "https://github.com/huggingface/transformers/pull/21593.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21593.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21592
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21592/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21592/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21592/events
|
https://github.com/huggingface/transformers/pull/21592
| 1,581,726,778
|
PR_kwDOCUB6oc5J0SMM
| 21,592
|
Removes duplicate computations in DETR post processing
|
{
"login": "eclique",
"id": 12473730,
"node_id": "MDQ6VXNlcjEyNDczNzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/12473730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eclique",
"html_url": "https://github.com/eclique",
"followers_url": "https://api.github.com/users/eclique/followers",
"following_url": "https://api.github.com/users/eclique/following{/other_user}",
"gists_url": "https://api.github.com/users/eclique/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eclique/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eclique/subscriptions",
"organizations_url": "https://api.github.com/users/eclique/orgs",
"repos_url": "https://api.github.com/users/eclique/repos",
"events_url": "https://api.github.com/users/eclique/events{/privacy}",
"received_events_url": "https://api.github.com/users/eclique/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Removes a duplicate softmax computation and changes variable names accordingly.
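A before/after sketch of the pattern being removed (illustrative shapes, not the exact DETR code):
```python
import torch

logits = torch.randn(2, 100, 92)  # (batch, num_queries, num_classes + 1)

# before: the softmax is computed twice on the same logits
scores_old = logits.softmax(-1)[..., :-1].max(-1)[0]
labels_old = logits.softmax(-1)[..., :-1].max(-1)[1]

# after: compute the probabilities once and reuse them
prob = logits.softmax(-1)
scores, labels = prob[..., :-1].max(-1)
```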
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21592/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21592",
"html_url": "https://github.com/huggingface/transformers/pull/21592",
"diff_url": "https://github.com/huggingface/transformers/pull/21592.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21592.patch",
"merged_at": 1676397602000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21591
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21591/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21591/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21591/events
|
https://github.com/huggingface/transformers/issues/21591
| 1,581,557,499
|
I_kwDOCUB6oc5eRKb7
| 21,591
|
How to fine-tune GPT-2? What's the dataset format for my own data?
|
{
"login": "ucas010",
"id": 50656998,
"node_id": "MDQ6VXNlcjUwNjU2OTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/50656998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ucas010",
"html_url": "https://github.com/ucas010",
"followers_url": "https://api.github.com/users/ucas010/followers",
"following_url": "https://api.github.com/users/ucas010/following{/other_user}",
"gists_url": "https://api.github.com/users/ucas010/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ucas010/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ucas010/subscriptions",
"organizations_url": "https://api.github.com/users/ucas010/orgs",
"repos_url": "https://api.github.com/users/ucas010/repos",
"events_url": "https://api.github.com/users/ucas010/events{/privacy}",
"received_events_url": "https://api.github.com/users/ucas010/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep the issues for bugs in the library and feature requests only.",
"@ucas010 The script you linked is not for training/fine-tuing however. It's only for generation with a provided prompt.",
"@ydshieh year๏ผI have tried with Chinese๏ผbut the result is not good , you see \r\n02/14/2023 09:52:08 - INFO - __main__ - Namespace(model_type='gpt2', model_name_or_path='gpt2', prompt='', length=20, stop_token=None, temperature=1.0, repetition_penalty=1.0, k=0, p=0.9, prefix='', padding_text='', xlm_language='', seed=42, no_cuda=False, num_return_sequences=1, fp16=False, device=device(type='cuda'), n_gpu=4)\r\nModel prompt >>> ๅไบ่ดฃไปป๏ผๆฐไบ่ดฃไปป๏ผๅไบ่ฏ่ฎผๆกไปถ๏ผ็ฎๆ็จๅบ๏ผไบบๆฐๆณ้ข\r\nThe attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\n=== GENERATED SEQUENCE 1 ===\r\nๅไบ่ดฃไปป๏ผๆฐไบ่ดฃไปป๏ผๅไบ่ฏ่ฎผๆกไปถ๏ผ็ฎๆ็จๅบ๏ผไบบๆฐๆณ้ข๏ผๅณ๏ผๆๆๆฐ่ฆไน๏ฟฝ",
"@ucas010 , as sgugger mentioned, this kind of question is for [Hugging Face Forums](https://discuss.huggingface.co/).\r\nThe GitHub repository is only for issues, bugs or features in the library.\r\n\r\nI am going to close this issue. But FYI: GPT-2 is only trained on English corpus, and you can't use it for other languages (not even with just fine-tuning).\r\n\r\n"
] | 1,676
| 1,676
| 1,676
|
NONE
| null |
### System Info
transformers-cli env
- `transformers` version: 4.26.0.dev0
- Platform: Linux-3.10.0-1160.81.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger @Rocketknight1
@gmftbyGMFTBY
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation
python run_generation.py \
--model_type=gpt2 \
--model_name_or_path=gpt2
???
--train_file ?
--valid_file?
### Expected behavior
The fine-tuned GPT-2 will generate Chinese well.
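For reference, fine-tuning GPT-2 on your own text is handled by the language-modeling example rather than the generation script; a hedged sketch of the invocation (the file names are placeholders):
```
python examples/pytorch/language-modeling/run_clm.py \
    --model_name_or_path gpt2 \
    --train_file train.txt \
    --validation_file valid.txt \
    --do_train --do_eval \
    --output_dir ./gpt2-finetuned
```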
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21591/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21590
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21590/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21590/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21590/events
|
https://github.com/huggingface/transformers/issues/21590
| 1,581,521,863
|
I_kwDOCUB6oc5eRBvH
| 21,590
|
Project_Test01
|
{
"login": "RutujaTalekar",
"id": 49370316,
"node_id": "MDQ6VXNlcjQ5MzcwMzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/49370316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RutujaTalekar",
"html_url": "https://github.com/RutujaTalekar",
"followers_url": "https://api.github.com/users/RutujaTalekar/followers",
"following_url": "https://api.github.com/users/RutujaTalekar/following{/other_user}",
"gists_url": "https://api.github.com/users/RutujaTalekar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RutujaTalekar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RutujaTalekar/subscriptions",
"organizations_url": "https://api.github.com/users/RutujaTalekar/orgs",
"repos_url": "https://api.github.com/users/RutujaTalekar/repos",
"events_url": "https://api.github.com/users/RutujaTalekar/events{/privacy}",
"received_events_url": "https://api.github.com/users/RutujaTalekar/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[] | 1,676
| 1,676
| null |
NONE
| null |
### Model description
This model takes as parameters training data (optional) and rubrics (a list of strings) to train. It gives zero-shot results for scoring logical-reasoning answers provided by students.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://arxiv.org/pdf/2301.08771.pdf
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21590/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/21589
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21589/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21589/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21589/events
|
https://github.com/huggingface/transformers/pull/21589
| 1,581,362,016
|
PR_kwDOCUB6oc5JzGE-
| 21,589
|
[i18n-fr] Translate quicktour page to French
|
{
"login": "NoB0",
"id": 28621493,
"node_id": "MDQ6VXNlcjI4NjIxNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/28621493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NoB0",
"html_url": "https://github.com/NoB0",
"followers_url": "https://api.github.com/users/NoB0/followers",
"following_url": "https://api.github.com/users/NoB0/following{/other_user}",
"gists_url": "https://api.github.com/users/NoB0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NoB0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NoB0/subscriptions",
"organizations_url": "https://api.github.com/users/NoB0/orgs",
"repos_url": "https://api.github.com/users/NoB0/repos",
"events_url": "https://api.github.com/users/NoB0/events{/privacy}",
"received_events_url": "https://api.github.com/users/NoB0/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Translated the `quicktour.mdx` file of the documentation to French.
Part of #21456
Thank you in advance for your review.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, could you review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21589/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21589",
"html_url": "https://github.com/huggingface/transformers/pull/21589",
"diff_url": "https://github.com/huggingface/transformers/pull/21589.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21589.patch",
"merged_at": 1676311532000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21588
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21588/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21588/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21588/events
|
https://github.com/huggingface/transformers/pull/21588
| 1,581,346,056
|
PR_kwDOCUB6oc5JzC_A
| 21,588
|
Add missing argument to run_clip.py
|
{
"login": "WarrenGreen",
"id": 1166181,
"node_id": "MDQ6VXNlcjExNjYxODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1166181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WarrenGreen",
"html_url": "https://github.com/WarrenGreen",
"followers_url": "https://api.github.com/users/WarrenGreen/followers",
"following_url": "https://api.github.com/users/WarrenGreen/following{/other_user}",
"gists_url": "https://api.github.com/users/WarrenGreen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WarrenGreen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WarrenGreen/subscriptions",
"organizations_url": "https://api.github.com/users/WarrenGreen/orgs",
"repos_url": "https://api.github.com/users/WarrenGreen/repos",
"events_url": "https://api.github.com/users/WarrenGreen/events{/privacy}",
"received_events_url": "https://api.github.com/users/WarrenGreen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,676
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# Missing test_file argument in run_clip.py
## Bug Experience
- Including the `test_file` parameter at runtime causes `HfArgumentParser` to throw an error for extra parameters (see the sketch after this list).
- Not including the `test_file` parameter at runtime causes a `NoneType` error [here](https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/run_clip.py#L299)
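A sketch of the fix: declaring the missing field on `DataTrainingArguments` so `HfArgumentParser` accepts it (the help text is illustrative):
```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class DataTrainingArguments:
    # ... existing fields ...
    test_file: Optional[str] = field(
        default=None,
        metadata={"help": "An optional input testing data file (a jsonlines file)."},
    )
```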
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21588/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21588",
"html_url": "https://github.com/huggingface/transformers/pull/21588",
"diff_url": "https://github.com/huggingface/transformers/pull/21588.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21588.patch",
"merged_at": 1676302044000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21587
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21587/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21587/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21587/events
|
https://github.com/huggingface/transformers/issues/21587
| 1,581,344,870
|
I_kwDOCUB6oc5eQWhm
| 21,587
|
[Whisper] ASR pipeline ignores/ rejects generate_kwargs on inference
|
{
"login": "Vaibhavs10",
"id": 18682411,
"node_id": "MDQ6VXNlcjE4NjgyNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vaibhavs10",
"html_url": "https://github.com/Vaibhavs10",
"followers_url": "https://api.github.com/users/Vaibhavs10/followers",
"following_url": "https://api.github.com/users/Vaibhavs10/following{/other_user}",
"gists_url": "https://api.github.com/users/Vaibhavs10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vaibhavs10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vaibhavs10/subscriptions",
"organizations_url": "https://api.github.com/users/Vaibhavs10/orgs",
"repos_url": "https://api.github.com/users/Vaibhavs10/repos",
"events_url": "https://api.github.com/users/Vaibhavs10/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vaibhavs10/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You should use the main branch! \r\nThe following \r\n```python \r\n def test_simple_whisper_translation(self):\r\n speech_recognizer = pipeline(\r\n task=\"automatic-speech-recognition\",\r\n model=\"openai/whisper-large\",\r\n framework=\"pt\",\r\n )\r\n ds = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation\").sort(\"id\")\r\n filename = ds[40][\"file\"]\r\n output = speech_recognizer(filename)\r\n self.assertEqual(output, {\"text\": \" A man said to the universe, Sir, I exist.\"})\r\n\r\n model = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-large\")\r\n tokenizer = AutoTokenizer.from_pretrained(\"openai/whisper-large\")\r\n feature_extractor = AutoFeatureExtractor.from_pretrained(\"openai/whisper-large\")\r\n\r\n speech_recognizer_2 = AutomaticSpeechRecognitionPipeline(\r\n model=model, tokenizer=tokenizer, feature_extractor=feature_extractor\r\n )\r\n output_2 = speech_recognizer_2(filename)\r\n self.assertEqual(output, output_2)\r\n\r\n # either use generate_kwargs or set the model's generation_config\r\n # model.generation_config.task = \"transcribe\"\r\n # model.generation_config.lang = \"<|it|>\"\r\n speech_translator = AutomaticSpeechRecognitionPipeline(\r\n model=model,\r\n tokenizer=tokenizer,\r\n feature_extractor=feature_extractor,\r\n generate_kwargs={\"task\": \"transcribe\", \"language\": \"<|it|>\"},\r\n )\r\n output_3 = speech_translator(filename)\r\n self.assertEqual(output_3, {\"text\": \" Un uomo ha detto all'universo, Sir, esiste.\"})\r\n```\r\nis used in our testing suit and can confirm that it works.",
"In my case, your script outputs the folliwing: \r\n```python \r\nโ /home/arthur_huggingface_co/transformers/src/transformers/models/whisper/modeling_whisper.py:134 โ\r\nโ 5 in generate โ\r\nโ โ\r\nโ 1342 โ โ โ\r\nโ 1343 โ โ if hasattr(generation_config, \"is_multilingual\") and generation_config.is_multil โ\r\nโ 1344 โ โ โ if hasattr(generation_config, \"language\"): โ\r\nโ โฑ 1345 โ โ โ โ forced_decoder_ids.append((1, generation_config.lang_to_id[generation_co โ\r\nโ 1346 โ โ โ else: โ\r\nโ 1347 โ โ โ โ forced_decoder_ids.append((1, None)) โ\r\nโ 1348 โ\r\nโฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ\r\nKeyError: 'german'\r\n```\r\nBecause \r\n```python \r\nIn [7]: pipe.model.generation_config.language\r\nOut[7]: 'german'\r\n```\r\nIt should be `<|XX|>`. There was an issue opened to make it so that the actual language code is used here, but we have not adressed it yet. ",
"Ha! I am invoking the `pipeline` the same way as you are in your first comment. The only difference is that on `4.26.1` it doesn't work and neither does it raise any errors.\r\nIt works perfectly on `main`: https://github.com/Vaibhavs10/scratchpad/blob/main/Whisper_return_timestamp_bug_report_w_main.ipynb\r\n\r\nDo we have any ETA on when this would make it to a release?",
"Think it was part of the patch release see [here ](https://github.com/huggingface/transformers/releases/tag/v4.26.1)"
] | 1,676
| 1,676
| 1,676
|
MEMBER
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
pipeline: @Narsil
whisper/speech: @ArthurZucker / @sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The failure mode is specific to the `Whisper` model in the ASR pipeline, and was potentially introduced alongside the `return_timestamps` argument.
Repro can be found in the [colab here](https://colab.research.google.com/drive/1koqO7Tjos5vGlFBCugGeCRKJm5gf2hfj?usp=sharing) or on [GitHub here](https://github.com/Vaibhavs10/scratchpad/blob/main/Whisper_return_timestamp_bug_report.ipynb)
Essentially, when the pipeline is instantiated and run with:
```python
pipe = pipeline("automatic-speech-recognition", model="openai/whisper-small", generate_kwargs={"task": "transcribe", "language": "german"})
pipe(test_sample, chunk_length_s=30)
```
It refuses to accept the `generate_kwargs` and throws an error:
```
ValueError: The following `model_kwargs` are not used by the model: ['task', 'language'] (note: typos in the generate arguments will also show up in this list)
```
The interesting bit: if I run inference while explicitly setting `return_timestamps=True`, it does work; however, it translates instead of transcribing.
```python
pipe(test_sample, return_timestamps=True, chunk_length_s=30, stride_length_s=[6,0])
```
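For reference, here is a quick diagnostic (my own sketch, not part of the original repro) to inspect which task token the pipeline actually forces; it assumes the `forced_decoder_ids` on the config are `(position, token_id)` pairs, as returned by `get_decoder_prompt_ids`:
```python
# Diagnostic sketch: decode the forced decoder prompt to check whether
# <|translate|> or <|transcribe|> ended up being forced.
forced = pipe.model.config.forced_decoder_ids
if forced is not None:
    print(pipe.tokenizer.decode([token_id for _, token_id in forced]))
```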
I went one step further and tried loading a vanilla pipeline, running it with the `forced_decoder_ids` set explicitly, and it worked well:
```python
pipe = pipeline("automatic-speech-recognition", model="openai/whisper-small")
pipe.model.config.forced_decoder_ids = (
pipe.tokenizer.get_decoder_prompt_ids(
language="de", task="transcribe"
)
)
pipe(test_sample, chunk_length_s=30)
```
However, if I now pass `return_timestamps=True`, it again ignores the original `forced_decoder_ids` and translates into English:
```python
pipe(test_sample, return_timestamps=True, chunk_length_s=30, stride_length_s=[6,0])
```
As said above, the [colab](https://colab.research.google.com/drive/1koqO7Tjos5vGlFBCugGeCRKJm5gf2hfj?usp=sharing) or the notebook on [GitHub](https://github.com/Vaibhavs10/scratchpad/blob/main/Whisper_return_timestamp_bug_report.ipynb) has a proper repro; do give it a look.
### Expected behavior
The expected behaviour of the pipeline would be to respect the `generate_kwargs`, or at least to throw a more meaningful error message.
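For comparison, here is a sketch of the invocation I would expect to work once this is fixed (modelled on the test snippet in the comments; note the language is passed as the `<|de|>` token rather than `"german"`, since the `lang_to_id` lookup in `generate` appears to key on the token form):
```python
# Expected-usage sketch (assumes the fix on main): generate_kwargs are
# respected, with the language given as the <|de|> token per lang_to_id.
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",
    generate_kwargs={"task": "transcribe", "language": "<|de|>"},
)
print(pipe(test_sample, chunk_length_s=30))  # test_sample as in the repro above
```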
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21587/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/21587/timeline
|
completed
| null | null |