| url stringlengths 62–66 | repository_url stringclasses 1 value | labels_url stringlengths 76–80 | comments_url stringlengths 71–75 | events_url stringlengths 69–73 | html_url stringlengths 50–56 | id int64 377M–2.15B | node_id stringlengths 18–32 | number int64 1–29.2k | title stringlengths 1–487 | user dict | labels list | state stringclasses 2 values | locked bool 2 classes | assignee dict | assignees list | comments list | created_at int64 1.54k–1.71k | updated_at int64 1.54k–1.71k | closed_at int64 1.54k–1.71k β | author_association stringclasses 4 values | active_lock_reason stringclasses 2 values | body stringlengths 0–234k β | reactions dict | timeline_url stringlengths 71–75 | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/20584
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20584/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20584/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20584/events
|
https://github.com/huggingface/transformers/pull/20584
| 1,476,201,023
|
PR_kwDOCUB6oc5EUsNl
| 20,584
|
Fix torch device issue
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
The fixes from #20304 somehow ended up in the wrong places in #20160, and we got torch device issues.
This PR fixes the device issue by putting `to` back in the correct places.
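To illustrate what "the correct places" means here (a generic example, not the actual diff from this PR):
```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

hidden = torch.randn(2, 4).to(device)  # already on the model's device
mask = torch.ones(2, 4)                # freshly created on CPU

# Wrong place: on a GPU machine, combining tensors on different devices first
# raises "Expected all tensors to be on the same device".
# out = (hidden * mask).to(device)

# Correct place: move the new tensor to the right device *before* using it.
out = hidden * mask.to(device)
```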
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20584/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20584",
"html_url": "https://github.com/huggingface/transformers/pull/20584",
"diff_url": "https://github.com/huggingface/transformers/pull/20584.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20584.patch",
"merged_at": 1670245054000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20583
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20583/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20583/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20583/events
|
https://github.com/huggingface/transformers/issues/20583
| 1,475,749,688
|
I_kwDOCUB6oc5X9ic4
| 20,583
|
AttributeError: 'DataParallel' object has no attribute 'model'
|
{
"login": "huynhhoanghuy",
"id": 32119423,
"node_id": "MDQ6VXNlcjMyMTE5NDIz",
"avatar_url": "https://avatars.githubusercontent.com/u/32119423?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huynhhoanghuy",
"html_url": "https://github.com/huynhhoanghuy",
"followers_url": "https://api.github.com/users/huynhhoanghuy/followers",
"following_url": "https://api.github.com/users/huynhhoanghuy/following{/other_user}",
"gists_url": "https://api.github.com/users/huynhhoanghuy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huynhhoanghuy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huynhhoanghuy/subscriptions",
"organizations_url": "https://api.github.com/users/huynhhoanghuy/orgs",
"repos_url": "https://api.github.com/users/huynhhoanghuy/repos",
"events_url": "https://api.github.com/users/huynhhoanghuy/events{/privacy}",
"received_events_url": "https://api.github.com/users/huynhhoanghuy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"contact me, I fixed it ",
"Hi, @huynhhoanghuy. I think that clrcmd trainer is trying to access `model.model` while your model is wrapped into DataParallel, hence there is no `.model` attribute.\r\nSee addressed [issue](https://github.com/sh0416/clrcmd/issues/2) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
NONE
| null |
### System Info
System Info
- torch==1.8.1+cu101
- transformers==4.10.1
- Python 3.8
- "Ubuntu 18.04.6 LTS"
I am training on multiple GPUs in parallel and not using pretrained weights. However, during training I hit this error, which breaks training:
```python
15%|ββ | 246/1617 [09:01<48:36, 2.13s/it]
15%|ββ | 247/1617 [09:03<48:21, 2.12s/it]
15%|ββ | 248/1617 [09:05<48:20, 2.12s/it]
15%|ββ | 249/1617 [09:07<48:17, 2.12s/it]
15%|ββ | 250/1617 [09:10<48:35, 2.13s/it]***** Running Evaluation *****
Num examples = 1500
Batch size = 512
{'loss': 1.2497, 'learning_rate': 4.941249226963513e-05, 'epoch': 0.04}
{'loss': 0.6803, 'learning_rate': 4.879406307977737e-05, 'epoch': 0.07}
{'loss': 0.6134, 'learning_rate': 4.817563388991961e-05, 'epoch': 0.11}
{'loss': 0.5777, 'learning_rate': 4.7557204700061845e-05, 'epoch': 0.15}
{'loss': 0.5626, 'learning_rate': 4.6938775510204086e-05, 'epoch': 0.19}
{'loss': 0.5413, 'learning_rate': 4.6320346320346326e-05, 'epoch': 0.22}
{'loss': 0.5249, 'learning_rate': 4.570191713048856e-05, 'epoch': 0.26}
{'loss': 0.5015, 'learning_rate': 4.50834879406308e-05, 'epoch': 0.3}
{'loss': 0.5017, 'learning_rate': 4.4465058750773034e-05, 'epoch': 0.33}
{'loss': 0.4924, 'learning_rate': 4.3846629560915274e-05, 'epoch': 0.37}
{'loss': 0.4831, 'learning_rate': 4.3228200371057515e-05, 'epoch': 0.41}
{'loss': 0.4695, 'learning_rate': 4.2609771181199755e-05, 'epoch': 0.45}
Traceback (most recent call last):
File "examples/run_train.py", line 105, in <module>
main()
File "examples/run_train.py", line 99, in main
train_result = trainer.train()
File "/root/data/huyhuynh/clrcmd-master/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1340, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/root/data/huyhuynh/clrcmd-master/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1445, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/data/huyhuynh/clrcmd-master/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2051, in evaluate
output = eval_loop(
File "/root/data/huyhuynh/clrcmd-master/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2223, in evaluation_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/root/data/huyhuynh/clrcmd-master/src/clrcmd/trainer.py", line 29, in prediction_step
score = model.model(inputs1, inputs2)
File "/root/data/huyhuynh/clrcmd-master/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 947, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'DataParallel' object has no attribute 'model'
15%|ββ | 250/1617 [09:10<50:11, 2.20s/it]
```
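As far as I can tell, this happens because `DataParallel` stores the wrapped network under `.module`, so attributes of the inner model (like `.model` here) are no longer reachable directly on the wrapper. A minimal standalone sketch (illustrative names, not the clrcmd code):
```python
import torch.nn as nn

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Linear(4, 4)  # the attribute the trainer tries to reach

wrapped = nn.DataParallel(MyNet())

# wrapped.model               # raises AttributeError, exactly as in the traceback above
inner = wrapped.module.model  # the original network is reachable via `.module`
print(type(inner))
```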
This is training code:
```python
import argparse
import logging
import os
import uuid
from transformers import TrainingArguments, set_seed
from clrcmd.data.dataset import (
ContrastiveLearningCollator,
NLIContrastiveLearningDataset,
STSBenchmarkDataset,
)
from clrcmd.data.sts import load_stsb_dev
from clrcmd.models import create_contrastive_learning, create_tokenizer
from clrcmd.trainer import STSTrainer, compute_metrics
import torch
logger = logging.getLogger(__name__)
parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
# fmt: off
parser.add_argument("--data-dir", type=str, help="Data directory", default="data")
parser.add_argument("--model", type=str, help="Model", default="bert-cls",
choices=["bert-cls", "bert-avg", "bert-rcmd", "roberta-cls", "roberta-avg", "roberta-rcmd"])
parser.add_argument("--output-dir", type=str, help="Output directory", default="ckpt")
parser.add_argument("--temp", type=float, help="Softmax temperature", default=0.05)
parser.add_argument("--seed", type=int, help="Seed", default=0)
# fmt: on
def main():
args = parser.parse_args()
experiment_name = f"{args.model}-{uuid.uuid4()}"
training_args = TrainingArguments(
os.path.join(args.output_dir, experiment_name),
per_device_train_batch_size=128,
per_device_eval_batch_size=128,
learning_rate=5e-5,
num_train_epochs=3,
fp16=True,
logging_strategy="steps",
logging_steps=20,
evaluation_strategy="steps",
eval_steps=250,
save_strategy="steps",
save_steps=250,
metric_for_best_model="eval_spearman",
load_best_model_at_end=True,
greater_is_better=True,
save_total_limit=1,
seed=args.seed,
)
if training_args.local_rank == -1 or training_args.local_rank == 0:
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(message)s",
filename=f"log/train-{experiment_name}.log",
)
logger.info("Hyperparameters")
for k, v in vars(args).items():
logger.info(f"{k} = {v}")
# Log on each process the small summary:
logger.warning(
f"Process rank: {training_args.local_rank}, "
f"device: {training_args.device}, "
f"n_gpu: {training_args.n_gpu}, "
f"distributed training: {bool(training_args.local_rank != -1)}, "
f"16-bits training: {training_args.fp16} "
)
# Set seed before initializing model.
set_seed(training_args.seed)
# Load pretrained model and tokenizer
tokenizer = create_tokenizer(args.model)
model = create_contrastive_learning(args.model, args.temp)
### model = torch.nn.DataParallel(model) --> tried but not fix ...
model.train()
train_dataset = NLIContrastiveLearningDataset(
os.path.join(args.data_dir, "nli_for_simcse.csv"), tokenizer
)
eval_dataset = STSBenchmarkDataset(
load_stsb_dev(os.path.join(args.data_dir, "STS", "STSBenchmark"))["dev"], tokenizer
)
trainer = STSTrainer(
model=model,
data_collator=ContrastiveLearningCollator(),
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
train_result = trainer.train()
logger.info(train_result)
trainer.module.save_model(os.path.join(training_args.output_dir, "checkpoint-best"))
if __name__ == "__main__":
main()
```
I searched for this problem, but I didn't find any solution.
Could you help me?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run training with `train_result = trainer.train()` on multiple GPUs (the Trainer wraps the model in `DataParallel`).
### Expected behavior
Training and evaluation run without the `DataParallel` attribute error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20583/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20582
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20582/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20582/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20582/events
|
https://github.com/huggingface/transformers/issues/20582
| 1,475,294,678
|
I_kwDOCUB6oc5X7zXW
| 20,582
|
ValueError: Tokenizer class `NllbTokenizer` does not exist or is not currently imported when using NLLB (On Paperspace)
|
{
"login": "svngoku",
"id": 32180057,
"node_id": "MDQ6VXNlcjMyMTgwMDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/32180057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/svngoku",
"html_url": "https://github.com/svngoku",
"followers_url": "https://api.github.com/users/svngoku/followers",
"following_url": "https://api.github.com/users/svngoku/following{/other_user}",
"gists_url": "https://api.github.com/users/svngoku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/svngoku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/svngoku/subscriptions",
"organizations_url": "https://api.github.com/users/svngoku/orgs",
"repos_url": "https://api.github.com/users/svngoku/repos",
"events_url": "https://api.github.com/users/svngoku/events{/privacy}",
"received_events_url": "https://api.github.com/users/svngoku/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Bug resolved ! \r\nI relaunch my instance many times and run this command `!pip3 install git+https://github.com/huggingface/transformers.git`",
"I have the same problem. help!!!"
] | 1,670
| 1,680
| 1,670
|
NONE
| null |
### System Info
Hello,
When i use the code below
```py
...
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
...
```
Models : `nllb-distilled-600M`
I got an error in my notebook instance (on `paperspace`), and I thought the problem was with the huggingface `transformers` version (`4.26.0.dev0`), but even on the right version it still doesn't work.
π€
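For reference, a quick check to run in the notebook (illustrative; `NllbTokenizer` only ships in fairly recent releases, so this import may fail if an older install is still active in the kernel):
```py
import transformers

print(transformers.__version__)  # confirm which install the kernel actually sees

# This import only works on versions that include NLLB support;
# on an older cached install it fails just like AutoTokenizer does.
from transformers import NllbTokenizer
```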
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
...
def load_models():
# build model and tokenizer
model_name_dict = {'nllb-distilled-600M': 'facebook/nllb-200-distilled-600M',
#'nllb-1.3B': 'facebook/nllb-200-1.3B',
#'nllb-distilled-1.3B': 'facebook/nllb-200-distilled-1.3B',
#'nllb-3.3B': 'facebook/nllb-200-3.3B',
}
model_dict = {}
for call_name, real_name in model_name_dict.items():
print('\tLoading model: %s' % call_name)
model = AutoModelForSeq2SeqLM.from_pretrained(real_name)
tokenizer = AutoTokenizer.from_pretrained(real_name)
model_dict[call_name+'_model'] = model
model_dict[call_name+'_tokenizer'] = tokenizer
return model_dict
...
```
### Expected behavior
See my model working as expected on the Gradio space
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20582/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20581
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20581/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20581/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20581/events
|
https://github.com/huggingface/transformers/issues/20581
| 1,475,256,517
|
I_kwDOCUB6oc5X7qDF
| 20,581
|
Sensible default for Trainer's dataloader_num_workers argument
|
{
"login": "kmewhort",
"id": 402654,
"node_id": "MDQ6VXNlcjQwMjY1NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/402654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kmewhort",
"html_url": "https://github.com/kmewhort",
"followers_url": "https://api.github.com/users/kmewhort/followers",
"following_url": "https://api.github.com/users/kmewhort/following{/other_user}",
"gists_url": "https://api.github.com/users/kmewhort/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kmewhort/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kmewhort/subscriptions",
"organizations_url": "https://api.github.com/users/kmewhort/orgs",
"repos_url": "https://api.github.com/users/kmewhort/repos",
"events_url": "https://api.github.com/users/kmewhort/events{/privacy}",
"received_events_url": "https://api.github.com/users/kmewhort/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"It's hard to have a nice default that would work everywhere: for NLP tasks you wouldn't need this since the preprocessing is fast. How about doing something in the vision examples?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
NONE
| null |
### Feature request
As a relative beginner with Transformers and ML, it took me quite a bit of performance analysis and fiddling to figure out why my GPU was being vastly underutilized when training on image classification. I finally figured out that the bottleneck was the dataloader (as is typical for image tasks, it runs a few image transformations per sample), and I got a 10X performance increase by setting `dataloader_num_workers` to 16.
Could this have a higher default to avoid this gotcha? Maybe it could default to something like half the number of available CPUs?
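To illustrate (the argument name is real; the heuristic is just an example of what such a default could look like):
```python
import os
from transformers import TrainingArguments

# Example heuristic only: half the available CPUs instead of the current default of 0
# (which loads data in the main process).
num_workers = max(1, (os.cpu_count() or 2) // 2)

training_args = TrainingArguments(
    output_dir="out",
    dataloader_num_workers=num_workers,  # e.g. 16 on a 32-core machine
)
```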
### Motivation
It's frustrating for a beginner to have training times super slow because of an unset parameter. It'd be great for the defaults to work well out-of-the-box.
### Your contribution
Happy to submit a PR if the idea is greenlit.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20581/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20580
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20580/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20580/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20580/events
|
https://github.com/huggingface/transformers/issues/20580
| 1,475,205,608
|
I_kwDOCUB6oc5X7dno
| 20,580
|
VideoMAE with `num_channels!=3` needs a small fix
|
{
"login": "layjain",
"id": 43300660,
"node_id": "MDQ6VXNlcjQzMzAwNjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/43300660?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/layjain",
"html_url": "https://github.com/layjain",
"followers_url": "https://api.github.com/users/layjain/followers",
"following_url": "https://api.github.com/users/layjain/following{/other_user}",
"gists_url": "https://api.github.com/users/layjain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/layjain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/layjain/subscriptions",
"organizations_url": "https://api.github.com/users/layjain/orgs",
"repos_url": "https://api.github.com/users/layjain/repos",
"events_url": "https://api.github.com/users/layjain/events{/privacy}",
"received_events_url": "https://api.github.com/users/layjain/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nThis is related to #19913. Would be great to open a PR! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
## Issue
The VideoMAE model doesn't work for non-RGB videos (when `num_channels!=3`). I believe this is caused by the hardcoded ImageNet means and stds in the following lines:
https://github.com/huggingface/transformers/blob/d51e7c7e8265d69db506828dce77eb4ef9b72157/src/transformers/models/videomae/modeling_videomae.py#L824L826
## Code to Reproduce
```
import torch
from transformers import VideoMAEConfig, VideoMAEForPreTraining
NUM_CHANNELS = 1
config = VideoMAEConfig(num_channels=NUM_CHANNELS)
model = VideoMAEForPreTraining(config)
pixel_values = torch.rand(1, 16, NUM_CHANNELS, 224, 224)
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (model.config.num_frames // model.config.tubelet_size) * num_patches_per_frame
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss = outputs.loss
```
This produces the following error:
```
File ".../lib/python3.10/site-packages/torch/functional.py", line 74, in broadcast_tensors
    return _VF.broadcast_tensors(tensors)  # type: ignore[attr-defined]
RuntimeError: The size of tensor a (512) must match the size of tensor b (1536) at non-singleton dimension 2
```
Setting NUM_CHANNELS = 3 works fine.
## Potential Fix
Since we don't have a `_DEFAULT_MEAN/STD` for non-RGB images, we can just replace `frames=pixel_values` and disallow `norm_pix_loss=False` if `num_channels!=3`. With this modification, the code works on my machine. I am willing to fix and submit a PR.
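Roughly, what I have in mind (an illustrative sketch of the logic, not the exact patch):
```python
import torch

def build_reconstruction_target(pixel_values, config, mean=None, std=None):
    # Illustrative sketch of the proposed change around the linked lines.
    if config.num_channels == 3:
        # current behaviour: un-normalize with the hardcoded ImageNet mean/std
        return pixel_values * std + mean
    if not config.norm_pix_loss:
        raise ValueError("norm_pix_loss=False is only supported when num_channels == 3")
    # for non-RGB inputs, skip the ImageNet un-normalization entirely
    return pixel_values
```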
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20580/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20579
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20579/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20579/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20579/events
|
https://github.com/huggingface/transformers/issues/20579
| 1,475,105,602
|
I_kwDOCUB6oc5X7FNC
| 20,579
|
fail to import 'microsoft/swin-tiny-patch4-window7-224' in AutoConfig
|
{
"login": "XZhang97666",
"id": 91291808,
"node_id": "MDQ6VXNlcjkxMjkxODA4",
"avatar_url": "https://avatars.githubusercontent.com/u/91291808?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XZhang97666",
"html_url": "https://github.com/XZhang97666",
"followers_url": "https://api.github.com/users/XZhang97666/followers",
"following_url": "https://api.github.com/users/XZhang97666/following{/other_user}",
"gists_url": "https://api.github.com/users/XZhang97666/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XZhang97666/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XZhang97666/subscriptions",
"organizations_url": "https://api.github.com/users/XZhang97666/orgs",
"repos_url": "https://api.github.com/users/XZhang97666/repos",
"events_url": "https://api.github.com/users/XZhang97666/events{/privacy}",
"received_events_url": "https://api.github.com/users/XZhang97666/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, @XZhang97666. It seems that config.json file is in the right place(https://huggingface.co/microsoft/swin-tiny-patch4-window7-224/blob/main/config.json). \r\nCould you please tell what version of `transformers` you're using?\r\nAlso have you tried to run the snippet code that at https://huggingface.co/microsoft/swin-tiny-patch4-window7-224?",
"> Hi, @XZhang97666. It seems that config.json file is in the right place(https://huggingface.co/microsoft/swin-tiny-patch4-window7-224/blob/main/config.json). Could you please tell what version of `transformers` you're using? Also have you tried to run the snippet code that at https://huggingface.co/microsoft/swin-tiny-patch4-window7-224?\r\n\r\nYes. I tried https://huggingface.co/microsoft/swin-tiny-patch4-window7-224, which works. However, another task does not even with the same transformers version (4.22.2).",
"> Hi, @XZhang97666. It seems that config.json file is in the right place(https://huggingface.co/microsoft/swin-tiny-patch4-window7-224/blob/main/config.json). Could you please tell what version of `transformers` you're using? Also have you tried to run the snippet code that at https://huggingface.co/microsoft/swin-tiny-patch4-window7-224?\r\n\r\nI found the issue. My task generate a local folder called \"microsoft/...\"",
"Closing this issue as it seems resolved, feel free to reopen."
] | 1,670
| 1,670
| 1,670
|
NONE
| null |
### System Info
OSError: microsoft/swin-tiny-patch4-window7-224 does not appear to have a file named config.json. Checkout 'https://huggingface.co/microsoft/swin-tiny-patch4-window7-224/None' for available files
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
NA
### Expected behavior
Loading 'microsoft/swin-tiny-patch4-window7-224' with `AutoConfig` succeeds.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20579/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20578
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20578/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20578/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20578/events
|
https://github.com/huggingface/transformers/issues/20578
| 1,474,738,163
|
I_kwDOCUB6oc5X5rfz
| 20,578
|
Missing support for token sampling in XLMRobertaTokenizer (sentencepiece)
|
{
"login": "talbaumel",
"id": 6032899,
"node_id": "MDQ6VXNlcjYwMzI4OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6032899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/talbaumel",
"html_url": "https://github.com/talbaumel",
"followers_url": "https://api.github.com/users/talbaumel/followers",
"following_url": "https://api.github.com/users/talbaumel/following{/other_user}",
"gists_url": "https://api.github.com/users/talbaumel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/talbaumel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/talbaumel/subscriptions",
"organizations_url": "https://api.github.com/users/talbaumel/orgs",
"repos_url": "https://api.github.com/users/talbaumel/repos",
"events_url": "https://api.github.com/users/talbaumel/events{/privacy}",
"received_events_url": "https://api.github.com/users/talbaumel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"+1",
"+1",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
NONE
| null |
### Feature request
Hi all, token sampling is supported by the sentencepiece library, but the kwargs required to enable it are blocked by the wrapper (`_tokenize` has no `**kwargs` param).
This simple fix will enable support for token sampling π
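For reference, this is the underlying `sentencepiece` capability that would be exposed (a small standalone sketch; `spm.model` is a placeholder for any trained SentencePiece model file):
```python
import sentencepiece as spm

# Illustrative: "spm.model" stands in for a real trained SentencePiece model.
sp = spm.SentencePieceProcessor(model_file="spm.model")

# Deterministic segmentation (what the tokenizer wrapper exposes today).
print(sp.encode("New York", out_type=str))

# Subword-regularization sampling: a different segmentation can be drawn on each call.
print(sp.encode("New York", out_type=str, enable_sampling=True, alpha=0.1, nbest_size=-1))
```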
### Motivation
Token sampling is awesome, it will enable learning a more robust model π
### Your contribution
In `XLMRobertaTokenizer.py`
```python
def _tokenize(self, text, **kwargs):
    enable_sampling = kwargs.get("enable_sampling", False)
    if enable_sampling:
        return self.sp_model.sample_encode_as_pieces(text, nbest_size=kwargs["nbest_size"], alpha=kwargs["alpha"])
    else:
        return self.sp_model.EncodeAsPieces(text)
```
And in `tokenization_utils_base.py`:
Line 318 --> `def split_on_tokens(tok_list, text, **kwargs):`
Line 338 --> `self._tokenize(token) if token not in self.unique_no_split_tokens else [token]`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20578/reactions",
"total_count": 4,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
}
|
https://api.github.com/repos/huggingface/transformers/issues/20578/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20577
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20577/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20577/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20577/events
|
https://github.com/huggingface/transformers/pull/20577
| 1,474,584,853
|
PR_kwDOCUB6oc5EPAu_
| 20,577
|
Add OneFormer Model
|
{
"login": "praeclarumjj3",
"id": 54928629,
"node_id": "MDQ6VXNlcjU0OTI4NjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/54928629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/praeclarumjj3",
"html_url": "https://github.com/praeclarumjj3",
"followers_url": "https://api.github.com/users/praeclarumjj3/followers",
"following_url": "https://api.github.com/users/praeclarumjj3/following{/other_user}",
"gists_url": "https://api.github.com/users/praeclarumjj3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/praeclarumjj3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/praeclarumjj3/subscriptions",
"organizations_url": "https://api.github.com/users/praeclarumjj3/orgs",
"repos_url": "https://api.github.com/users/praeclarumjj3/repos",
"events_url": "https://api.github.com/users/praeclarumjj3/events{/privacy}",
"received_events_url": "https://api.github.com/users/praeclarumjj3/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @praeclarumjj3, thanks a lot for your PR. It's awesome OneFormer will be available in the library (we already have [MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer) and plan to add Mask2Former as well).\r\n\r\nI've got 2 main points for now:\r\n\r\n## Backbones\r\n\r\nHowever, there's no need to implement backbones from scratch again, as we've just added the `AutoBackbone` class, which allows to use frameworks like DETR, Mask R-CNN, and also OneFormer with all vision backbones available in the library. The idea is to add an `xxxBackbone` class to each vision model, see for instance [here](https://github.com/huggingface/transformers/blob/699e90437f984d69ad3c9b891dd2e9d0fc2cffe4/src/transformers/models/resnet/modeling_resnet.py#L434) for ResNet. \r\n\r\nNext, the framework (like OneFormer) can use the `AutoBackbone` class as shown [here](https://github.com/huggingface/transformers/blob/699e90437f984d69ad3c9b891dd2e9d0fc2cffe4/src/transformers/models/maskformer/modeling_maskformer.py#L1385) for MaskFormer. This allows to mix-and-match backbones with a given framework.\r\n\r\nThe plan is to next add `SwinBackbone`, `ConvNextBackbone`, as well as `NatBackbone` and `DinatBackbone` => which will make sure OneFormer can use them.\r\n\r\n## Auto class\r\n\r\nI doubt there's a need for an `AutoModelForUniversalSegmentation` class, as OneFormer is probably the only class which will ever be supported by it. It'd be great to make OneFormer work with our existing image segmentation pipeline (cc @Narsil). This pipeline supports instance, semantic and panoptic segmentation, and uses the appropriate postprocess method.\r\n\r\nWill soon do a more in depth review! Thanks already for all your work.\r\n\r\n",
"> @praateekmahajan thank you for working on this! Seems like you already made very good progress, my main comments are:\r\n> \r\n> * As Niels suggested, you can create and/or leverage the XXXBackbone classes. The SwinBackbone PR will be merged shortly so you can just focus on the DinatBackbone class.\r\n> * The current code is CUDA dependent (correct me if I'm wrong). I took a look at the paper and the Pixel Decoder seems very similar to that of Mask2Former (also uses multi-scale deformable attention). Perhaps you could use their PyTorch implementation to get rid of the CUDA scripts, here is the [relevant Mask2Former code.](https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/modeling/pixel_decoder/msdeformattn.py)\r\n> * I think having a OneFormerForUniversalSegmentation class makes sense but we can add it to auto mapping for instance segmentation instead of creating a new mapping for simplicity.\r\n> \r\n> I will do a detailed review once the custom CUDA scripts are cleaned up.\r\n> \r\n> Thanks again :)\r\n\r\nThanks for the suggestions @alaradirik! I will work on using AutoBackbone classes everywhere. About the CUDA code, sure, the PyTorch code is already [there](https://github.com/praeclarumjj3/transformers/blob/cb9cba1bf6d0249401ffacfbe9eca54ba1c384c8/src/transformers/models/oneformer/modeling_oneformer.py#L1228), we just check for the presence of GPU. I will clean the CUDA files. Also, I believe you tagged the wrong person by mistake π.\r\n\r\n> I think having a OneFormerForUniversalSegmentation class makes sense but we can add it to auto mapping for instance segmentation instead of creating a new mapping for simplicity.\r\n\r\nI still think it's better to create a different `AutoMapping` class for OneFormer as it belongs to a whole new class of architecture which uses a single model for all three tasks. Is it possible for us to keep it? Hopefully, there will be follow-up works in the same direction as OneFormer's approach of training a single model.",
"> Thanks for the suggestions @alaradirik! I will work on using AutoBackbone classes everywhere. About the CUDA code, sure, the PyTorch code is already [there](https://github.com/praeclarumjj3/transformers/blob/cb9cba1bf6d0249401ffacfbe9eca54ba1c384c8/src/transformers/models/oneformer/modeling_oneformer.py#L1228),\r\n\r\nGreat, that makes things much easier then, and sorry about tagging the wrong person :)\r\n> \r\n> I still think it's better to create a different `AutoMapping` class for OneFormer as it belongs to a whole new class of architecture which uses a single model for all three tasks. Is it possible for us to keep it? Hopefully, there will be follow-up works in the same direction as OneFormer's approach of training a single model.\r\n\r\nMaskFormer and Mask2Former (in progress in another PR) also feature universal segmentation architectures and I agree that new research will likely leverage the same paradigm. In retrospect, creating an auto mapping for universal segmentation and adding MaskFormer and Mask2Former along with OneFormer might be better. @NielsRogge what do you think about this?\r\n\r\n",
"Hi @NielsRogge @alaradirik, I have the made all the suggested changes, please let me know if I missed anything. Only one thing remains: using Autobackbone for Dinat (will do after the PR for that is merged).\r\n\r\nAlso a reminder about merging this [PR](https://huggingface.co/datasets/huggingface/documentation-images/discussions/11) for documentation images :)\r\n\r\n## Changes after Review\r\n\r\n- [x] Replace Swin backbone file with AutoBackbone\r\n- [x] Replace Dinat backbone file with AutoBackbone\r\n- [x] Remove FeatureExtractor Class.\r\n- [x] Remove CUDA dependency code.\r\n- [x] Apply suggested changes to image segmentation `task_inputs` description.\r\n- [x] Remove dataset info files and use json files hosted on hf_hub instead.\r\n",
"@praeclarumjj3 Thanks for adding this model! β \r\n\r\n@praeclarumjj3 @NielsRogge @sgugger Yes, I think it would be better to add a `OneFormerProcessor` that contains both the tokenizer and image processor, similar to e.g. [OwlViT](https://github.com/huggingface/transformers/blob/94f8e21c7095430caa01272e16a367a421822e1c/src/transformers/models/owlvit/processing_owlvit.py#LL63C5-L63C5). In particular because the text processing is, as far as I can tell, independent of the processing of the images and it ensures accessing, loading and saving of the processing objects (tokenizer & image processor) is consistent across models. ",
"Hi @NielsRogge @alaradirik @amyeroberts, I have made all the requested changes and added a new `OneFormerProcessor` class.",
"Sorry I actually pushed on this PR, I didn't mean to:\r\n\r\nhttps://github.com/huggingface/transformers/pull/20851 (For some reason I could not create a PR on top of you PR)",
"This PR has become too massive to be merged safely. Could you split the model addition and the pipeline addition in two different PRs?",
"> This PR has become too massive to be merged safely. Could you split the model addition and the pipeline addition in two different PRs?\r\n\r\n@NielsRogge do you mind taking care of it ?\r\nLet's remove my commits from this branch and just ignore the pipeline, I will then rebase my own PR on top once this is merged.",
"Sure, I think @praeclarumjj3 can revert the pipeline commits since I don't have write access and then @sgugger can have a final review.",
"@NielsRogge, are you sure you don't have write access? If @Narsil managed to push on the branch I think we should all have write access. I think it would be nice to take care of this given that @praeclarumjj3 has already done a big amount of work on the PR :) \r\n\r\nThanks a lot!",
"@praeclarumjj3 opened a PR here to revert the pipeline updates: https://github.com/praeclarumjj3/transformers/pull/1",
"Thanks for all your work! Merging now.",
"Hi @praeclarumjj3 Thank you for adding this model! \r\n\r\nThere are a few examples in the docstrings failing the CI. For example, in `OneFormerForUniversalSegmentation.forward`\r\n```python\r\n >>> # you can pass them to feature_extractor for instance postprocessing\r\n >>> predicted_instance_map = feature_extractor.post_process_instance_segmentation(\r\n```\r\nthe `feature_extractor` is not defined.\r\n\r\nWould you like to make them fixed π ? If so, you can run the doctest like\r\n\r\n(if you have some change in the branch, stage them first)\r\n\r\n```python\r\npython3 utils/prepare_for_doc_test.py src docs\r\n```\r\nthen\r\n```bash\r\npython3 -m pytest -v --make-reports doc_tests_gpu --doctest-modules src/transformers/models/gptj/modeling_gptj.py::transformers.models.gptj.modeling_gptj.GPTJForSequenceClassification.forward -sv --doctest-continue-on-failure --doctest-glob=\"*.mdx\"\r\n```\r\nand also\r\n```bash\r\npython3 -m pytest -v --make-reports doc_tests_gpu --doctest-modules src/transformers/models/oneformer/modeling_oneformer.py::transformers.models.oneformer.modeling_oneformer.OneFormerModel.forward -sv --doctest-continue-on-failure --doctest-glob=\"*.mdx\"\r\n```\r\nAfter running the doctests, discard the change produced by `prepare_for_doc_test.py`, and see if you need more changes in the branch.\r\n\r\nDon't hesitate if you have further question, or if you could not find time on this at this moment (our team will fix it then) π Thank you\r\n",
"Hi @ydshieh, thanks for pointing this out to me. I apologize for not fixing the docstrings in the original PR (missed the changes after changing the code in an older commit). I have opened a new PR with the inconsistencies fixed: #21215.\r\n\r\nPlease take a look and let me know if something's still broken. And thanks for letting me know about the doctests! βπ» "
] | 1,670
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds the Code, Documentation, and Tests for OneFormer proposed in [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220). I have also opened a [PR](https://huggingface.co/datasets/huggingface/documentation-images/discussions/11) to add the documentation images to `huggingface/documentation-images`.
I have also made changes to the `ImageSegmentationPipeline` to accommodate OneFormer.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten @NielsRogge
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20577/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20577/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20577",
"html_url": "https://github.com/huggingface/transformers/pull/20577",
"diff_url": "https://github.com/huggingface/transformers/pull/20577.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20577.patch",
"merged_at": 1674117068000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20576
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20576/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20576/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20576/events
|
https://github.com/huggingface/transformers/issues/20576
| 1,474,548,432
|
I_kwDOCUB6oc5X49LQ
| 20,576
|
Flan-T5 returns incomplete results
|
{
"login": "dyxohjl666",
"id": 109141168,
"node_id": "U_kgDOBoFcsA",
"avatar_url": "https://avatars.githubusercontent.com/u/109141168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dyxohjl666",
"html_url": "https://github.com/dyxohjl666",
"followers_url": "https://api.github.com/users/dyxohjl666/followers",
"following_url": "https://api.github.com/users/dyxohjl666/following{/other_user}",
"gists_url": "https://api.github.com/users/dyxohjl666/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dyxohjl666/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dyxohjl666/subscriptions",
"organizations_url": "https://api.github.com/users/dyxohjl666/orgs",
"repos_url": "https://api.github.com/users/dyxohjl666/repos",
"events_url": "https://api.github.com/users/dyxohjl666/events{/privacy}",
"received_events_url": "https://api.github.com/users/dyxohjl666/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You can set min_length and max_length in model.generate() to adjust the length of generation settings. By default, max_length is not very long.",
"It works! Thanks!\r\n\r\n> You can set min_length and max_length in model.generate() to adjust the length of generation settings. By default, max_length is not very long.\r\n\r\n"
] | 1,670
| 1,670
| 1,670
|
NONE
| null |
### System Info
transformers version: 4.19.2
platform: Linux
python: 3.8.13
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
inputs = tokenizer("Summarize the following text: Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital. Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well. Therefore, Peter stayed with her at the hospital for 3 days without leaving.", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
>>> ['Peter and Elizabeth went to a party together. Elizabeth collapsed and was rushed to the']
```
### Expected behavior
The generated text isn't complete. It seems to be truncated. I just used the example code, so I have no idea what causes this problem.
Thanks for your help :)
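For what it's worth, a minimal variation of the snippet above (assuming the truncation comes from the default generation length rather than from the model itself):
```
# Same model/tokenizer/inputs as above; only the generate call changes.
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```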
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20576/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20575
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20575/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20575/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20575/events
|
https://github.com/huggingface/transformers/issues/20575
| 1,474,509,382
|
I_kwDOCUB6oc5X4zpG
| 20,575
|
model.generate() function raise a exception
|
{
"login": "yumoqing",
"id": 2088520,
"node_id": "MDQ6VXNlcjIwODg1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2088520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yumoqing",
"html_url": "https://github.com/yumoqing",
"followers_url": "https://api.github.com/users/yumoqing/followers",
"following_url": "https://api.github.com/users/yumoqing/following{/other_user}",
"gists_url": "https://api.github.com/users/yumoqing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yumoqing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yumoqing/subscriptions",
"organizations_url": "https://api.github.com/users/yumoqing/orgs",
"repos_url": "https://api.github.com/users/yumoqing/repos",
"events_url": "https://api.github.com/users/yumoqing/events{/privacy}",
"received_events_url": "https://api.github.com/users/yumoqing/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sanchit-gandhi and @gante ",
"Hey @yumoqing, \r\n\r\nIn this case, it's just the code snippet on the [model README](https://huggingface.co/facebook/s2t-small-librispeech-asr) that's wrong. Pasting a corrected version of the code snippet that you can use:\r\n```python\r\nimport torch\r\nfrom transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration\r\nfrom datasets import load_dataset\r\n\r\nmodel = Speech2TextForConditionalGeneration.from_pretrained(\"facebook/s2t-small-librispeech-asr\")\r\nprocessor = Speech2TextProcessor.from_pretrained(\"facebook/s2t-small-librispeech-asr\")\r\n\r\nds = load_dataset(\"patrickvonplaten/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\nsample = ds[0][\"audio\"]\r\n\r\ninputs = processor(sample[\"array\"], sampling_rate=sample[\"sampling_rate\"], return_tensors=\"pt\")\r\ngenerated_ids = model.generate(input_features=inputs[\"input_features\"], attention_mask=inputs[\"attention_mask\"])\r\ntranscription = processor.batch_decode(generated_ids, skip_special_tokens=True)\r\nprint(transcription)\r\n```\r\n**Print Output:**\r\n```\r\n['mister quilter is the apostle of the middle classes and we are glad to welcome his gospel']\r\n```\r\n\r\nI've opened a PR to update the example on the model's README card on the Hub: https://huggingface.co/facebook/s2t-small-librispeech-asr/discussions/2/files"
] | 1,670
| 1,670
| 1,670
|
NONE
| null |
### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.15.0-53-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0+cu117 (False)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids)
print(f'{transcription=}')
```
1. Copy/paste the above code from [huggingface.co](https://huggingface.co/docs/transformers/model_doc/speech_to_text).
2. Run the script.
3. Get an exception:
```
Traceback (most recent call last):
File "/home/ymq/tmp/pretrained-models/test/t.py", line 16, in <module>
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
File "/home/ymq/py3/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/ymq/py3/lib/python3.10/site-packages/transformers/generation_utils.py", line 1208, in generate
self._validate_model_kwargs(model_kwargs.copy())
File "/home/ymq/py3/lib/python3.10/site-packages/transformers/generation_utils.py", line 910, in _validate_model_kwargs
raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['input_ids'] (note: typos in the generate arguments will also show up in this list)
```
### Expected behavior
Get the speech-to-text (STT) transcription text.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20575/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20574
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20574/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20574/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20574/events
|
https://github.com/huggingface/transformers/issues/20574
| 1,474,452,859
|
I_kwDOCUB6oc5X4l17
| 20,574
|
[i18n-<languageCode>] Translating docs to <languageName>
|
{
"login": "Mellobrainbox",
"id": 56656715,
"node_id": "MDQ6VXNlcjU2NjU2NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/56656715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mellobrainbox",
"html_url": "https://github.com/Mellobrainbox",
"followers_url": "https://api.github.com/users/Mellobrainbox/followers",
"following_url": "https://api.github.com/users/Mellobrainbox/following{/other_user}",
"gists_url": "https://api.github.com/users/Mellobrainbox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mellobrainbox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mellobrainbox/subscriptions",
"organizations_url": "https://api.github.com/users/Mellobrainbox/orgs",
"repos_url": "https://api.github.com/users/Mellobrainbox/repos",
"events_url": "https://api.github.com/users/Mellobrainbox/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mellobrainbox/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"Hi @Mellobrainbox, could you fill the template with the language you are interested in?"
] | 1,670
| 1,670
| null |
NONE
| null |
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community π (currently 0 out of 267 complete)
Who would want to translate? Please follow the π€ [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers π€).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through)
- [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx).
## Tutorial section
- [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx)
- [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/autoclass_tutorial.mdx)
- [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx)
- [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx)
- [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx)
- [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx)
- [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20574/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20574/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/20573
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20573/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20573/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20573/events
|
https://github.com/huggingface/transformers/pull/20573
| 1,474,308,188
|
PR_kwDOCUB6oc5EOBCd
| 20,573
|
Add Multi Resolution Analysis (MRA)
|
{
"login": "novice03",
"id": 44259234,
"node_id": "MDQ6VXNlcjQ0MjU5MjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/44259234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/novice03",
"html_url": "https://github.com/novice03",
"followers_url": "https://api.github.com/users/novice03/followers",
"following_url": "https://api.github.com/users/novice03/following{/other_user}",
"gists_url": "https://api.github.com/users/novice03/gists{/gist_id}",
"starred_url": "https://api.github.com/users/novice03/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/novice03/subscriptions",
"organizations_url": "https://api.github.com/users/novice03/orgs",
"repos_url": "https://api.github.com/users/novice03/repos",
"events_url": "https://api.github.com/users/novice03/events{/privacy}",
"received_events_url": "https://api.github.com/users/novice03/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
closed
| false
| null |
[] |
[
"cc @amyeroberts and @NielsRogge ",
"Hello @amyeroberts, thank you so much for going over the code! I've made changes to my branch and left some questions in the above suggestions. Please take a look at them when you are available. I also have a few additional questions and clarifications:\r\n\r\n> Can you add any necessary optional dependencies\r\n\r\nIf I'm not mistaken MRA does not need any optional dependencies. All functions/ classes only require torch and the CUDA kernels. Unfortunately, unlike YOSO, MRA requires CUDA kernels - it cannot run without them. Could it be that the tests are failing because the kernels are not being loaded? If so, how can we handle this dependency on CUDA kernels in the HF implementation? ",
"Hello @amyeroberts, pinging to follow up on this PR. ",
"Hi @novice03 - thanks for the ping. Re-reviewing now! ",
"Before I start reviewing more, could you:\r\n- fix the conflicts\r\n- make sure all tests pass (you can run them locally with pytest)\r\n- make sure all quality checks pass (you can run `make fixup` for most changes that can be automated then look at `make repo-consistency` locally to see where other quality scripts are unhappy).",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20573). All of your documentation changes will be reflected on that endpoint.",
"Thank you for the second review @amyeroberts. I've made most, if not all, the changes you suggested. \r\n\r\nI'm now working on fixing the tests. `tests_torch` fails because the CUDA kernels are not being loaded correctly. I've added some extra code (for e.g. calling `load_cuda_kernels()` in `mra2_attention()`) for debugging purposes, which I'll remove it later. @sgugger I might need your help in understanding how to correctly load the kernels. I get `RuntimeError: Ninja is required to load C++ extensions` for `test_determinism`. Are Ninja and CUDA not available when running the tests?",
"No they are not, as most users won't have them installed. Both are only installed in the runners that run the nightly tests.",
"Thanks @sgugger. How can I use the nightly test runners instead? ",
"@novice03 Unfortunately you can't use the nightly tests environment for the full test suite. As @sgugger notes, most users won't have ninja and cuda installed in their environment - this is something for which the model will need to be robust. \r\n\r\nI mentioned in one of [my comments](https://github.com/huggingface/transformers/pull/20573/files#r1072316307) that deformable DETR has a safe way of loading the CUDA kernels. I think this would be the first things to address as this handles the case when ninja and cuda aren't in the environment. ",
"Thank you @amyeroberts. I didn't notice that DETR can run using regular PyTorch - without CUDA kernels. MRA does not have this functionality yet, so I will be working on this. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hello @amyeroberts, I've been talking to the authors about writing non-CUDA PyTorch code for MRA. It seems that writing a PyTorch alternative for MRA, especially the `sparse_max` function, will be extremely inefficient and infeasable. I am currently looking into other alternatives for running CUDA kernels on machines without Ninja. How about pre-compiling the CUDA kernels, add the .so .egg etc files in the repo, and import it during run-time? This way we can provide the pre-compiled kernels to users - we can compile it on a machine that has Ninja and import it later. ",
"Hi @novice03 - thanks for the update. \r\n\r\nI realise my previous comment might not have been completely clear and didn't catch that in your reply. The models relying on custom CUDA kernels don't have pytorch equivalents implemented. Rather, they have a safe way of importing the models if ninja and cuda aren't available e.g. [in deformable detr](https://github.com/huggingface/transformers/blob/a5392ee7470f34bb48417ca2af97b9189f0eda70/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L54), the `is_torch_cuda_available` and `is_ninja_available` functions are used to conditionally load the cuda kernels. If they aren't available, [dummy variables are used](https://github.com/huggingface/transformers/blob/a5392ee7470f34bb48417ca2af97b9189f0eda70/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L62). ",
"Thanks for the clarification. I assumed we needed a PyTorch implementation since I saw one in [deformable detr](https://github.com/huggingface/transformers/blob/a5392ee7470f34bb48417ca2af97b9189f0eda70/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L548). However, if equivalent PyTorch functions are not required, then I can just add dummy variables/functions for MRA. ",
"Hello @amyeroberts and @sgugger, I've added safe loading of the CUDA kernels and made sure all the tests pass. I also uploaded a checkpoint to the hub. Please take a look at the updated code. ",
"Hi @sgugger and @amyeroberts, I've resolved all conflicts and ensured that all the tests pass. Can you please take a look at the updated code?",
"Thank you @sgugger. I've addressed all of your suggestions. Please take a look at the updated code. ",
"Thanks for the corrections @sgugger! I've made all the changes suggested and taken another look at the code (fixed some urls and tests). It looks like there are merge conflicts because of the .mdx files on my branch. How do you recommend resolving the conflicts? Should I change all the .mdx files to .md?",
"Yes, you will need to merge main into your branch (or rebase if you prefer) to fix the conflicts and also switch all your mdx to md.\r\n\r\nThis is because GitHub recently made changes to the UI of the diffs for MDX files, which makes it really hard to review PRs, so we switched everything to Markdown. Sorry about that.",
"Hi @sgugger, I might need some help in correctly switching from mdx to md. I tried renaming and git mv, but this still creates a lot of conflicts. What do you suggest?",
"No you can't rename them in this PR, you need to rebase on main or merge the main branch into yours.",
"Continuing in #24513 "
] | 1,670
| 1,689
| 1,687
|
CONTRIBUTOR
| null |
# Add Multi Resolution Analysis (MRA) for Approximate Self-Attention (Old PR)
This PR adds the MRA model to the repository.
Paper: [https://arxiv.org/pdf/2207.10284.pdf](https://arxiv.org/pdf/2207.10284.pdf)
Code: [https://github.com/mlpen/mra-attention](https://github.com/mlpen/mra-attention)
To-do:
- [ ] Improve loading of CUDA kernels (see the sketch below)
- [ ] Improve formatting and documentation
- [ ] Upload checkpoints
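For reference, a minimal sketch of the "safe" kernel-loading pattern discussed in review, mirroring deformable DETR's approach. This is an assumption-laden illustration, not the merged code: the extension name and source file paths are placeholders.
```python
# Hedged sketch; `cuda_kernel` and the source file names are illustrative.
from transformers.utils import is_ninja_available, is_torch_cuda_available

mra_cuda_kernel = None  # module-level handle; stays None when kernels are unavailable

def load_cuda_kernels():
    global mra_cuda_kernel
    from torch.utils.cpp_extension import load  # compiles on first use via ninja

    mra_cuda_kernel = load(
        "cuda_kernel",
        sources=["cuda_kernel.cu", "cuda_launch.cu"],  # placeholder paths
        verbose=True,
    )

# Only attempt compilation when both CUDA and ninja are present, so machines
# without them (e.g. the CI runners) can still import the model module.
if is_torch_cuda_available() and is_ninja_available():
    try:
        load_cuda_kernels()
    except Exception:
        mra_cuda_kernel = None  # fall back; the model can raise a clear error later
```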
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20573/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20573",
"html_url": "https://github.com/huggingface/transformers/pull/20573",
"diff_url": "https://github.com/huggingface/transformers/pull/20573.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20573.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20572
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20572/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20572/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20572/events
|
https://github.com/huggingface/transformers/pull/20572
| 1,474,307,545
|
PR_kwDOCUB6oc5EOA5z
| 20,572
|
Add OneFormer Model
|
{
"login": "praeclarumjj3",
"id": 54928629,
"node_id": "MDQ6VXNlcjU0OTI4NjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/54928629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/praeclarumjj3",
"html_url": "https://github.com/praeclarumjj3",
"followers_url": "https://api.github.com/users/praeclarumjj3/followers",
"following_url": "https://api.github.com/users/praeclarumjj3/following{/other_user}",
"gists_url": "https://api.github.com/users/praeclarumjj3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/praeclarumjj3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/praeclarumjj3/subscriptions",
"organizations_url": "https://api.github.com/users/praeclarumjj3/orgs",
"repos_url": "https://api.github.com/users/praeclarumjj3/repos",
"events_url": "https://api.github.com/users/praeclarumjj3/events{/privacy}",
"received_events_url": "https://api.github.com/users/praeclarumjj3/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds the Code, Documentation and Tests for OneFormer proposed in [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220). I have also opened a [PR](https://huggingface.co/datasets/huggingface/documentation-images/discussions/11) to add the documentation images to `huggingface/documentation-images`.
I have not integrated OneFormer into the [`image-segmentation`](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/image_segmentation.py) pipeline yet. As OneFormer takes two inputs (image and task token), I will need to create a new pipeline. Please let me know if I should add that to this PR or open a new one.
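For readers, a hedged usage sketch of that two-input interface as it eventually landed in the library (the checkpoint name and sample image are illustrative, not part of this PR):
```python
# Hedged sketch of the image + task-token interface described above.
import requests
from PIL import Image
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The processor builds both inputs: pixel values and the task token.
inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
outputs = model(**inputs)
```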
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten @NielsRogge @amyeroberts @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20572/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20572",
"html_url": "https://github.com/huggingface/transformers/pull/20572",
"diff_url": "https://github.com/huggingface/transformers/pull/20572.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20572.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20571
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20571/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20571/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20571/events
|
https://github.com/huggingface/transformers/issues/20571
| 1,474,283,920
|
I_kwDOCUB6oc5X38mQ
| 20,571
|
Can not sample next tokens with GPT-2 model with GPT2Config `reorder_and_upcast_attn=True`
|
{
"login": "hogru",
"id": 3949272,
"node_id": "MDQ6VXNlcjM5NDkyNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3949272?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hogru",
"html_url": "https://github.com/hogru",
"followers_url": "https://api.github.com/users/hogru/followers",
"following_url": "https://api.github.com/users/hogru/following{/other_user}",
"gists_url": "https://api.github.com/users/hogru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hogru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hogru/subscriptions",
"organizations_url": "https://api.github.com/users/hogru/orgs",
"repos_url": "https://api.github.com/users/hogru/repos",
"events_url": "https://api.github.com/users/hogru/events{/privacy}",
"received_events_url": "https://api.github.com/users/hogru/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante ",
"Hi @hogru πΒ Having a popular project like `transformers` means we get many support and feature requests β if we want to maximize how much we help the community, the community has to help us stay productive π\r\n\r\nTo that end, please share a *short* script where the issue is clearly reproducible on *any* computer. In your particular case, your example is missing the model itself, which can influence the `sample` call in many ways (e.g. depending on the model config). Thank you π€",
"Hi @gante, I get this. Since I can avoid the issue by not using that option and you hint at the issue being specific to my environment/config/model/... I save you and me some time and \"close\" the issue. I assumed this to be a generic issue and wanted to let you know. Solving it for my specific use case is not a priority.",
"Hey @hogru -- actually it may be an issue that happens on all sorts of environments and models :) \r\n\r\nI didn't mean to sound dismissive. The limitation here is manpower: we have many issues per maintainer, so our focus is on 1) common issues; 2) issues where we can pin the issue. This is the first time I see this issue, so 1) doesn't apply. For 2) to happen, I need to be able to reproduce the issue quickly, otherwise it will be a huge time sink to find the exact problem so it can be fixed π€ That's where the short script comes in!",
"Hi @gante, thanks for reaching out, I did not perceive it as dismissive and my answer was intended to be in a friendly voice. But English is not my native language... And I am new to hugging face which means that (a) there's a chance that I overlook something obvious and (b) I need to figure out how to push the model to the hub, ahem. I know, probably pretty simple, but I am uncertain if it's intended to hold models for debugging. And again, following your argument, your time is likely better invested in other areas. So, all good here."
] | 1,670
| 1,671
| 1,671
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.12.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no (for debugging/generating, training done on GPU)
- Using distributed or parallel set-up in script?: no
### Who can help?
@patrickvonplaten (referenced in both GPT-2 and Text generation)
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The following code uses a GPT-2 model trained from scratch. It works without problems when `reorder_and_upcast_attn=False` (the default), but with `reorder_and_upcast_attn=True` it throws `RuntimeError: probability tensor contains either 'inf', 'nan' or element < 0` when calling `model.sample()`. You might need to call `sample()` several times (I did 100 calls in my experiments).
```python
import torch
from transformers import AutoModelForCausalLM, PreTrainedTokenizerFast

model_path = "path/to/gpt2-from-scratch"  # placeholder: the local checkpoint directory

model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = PreTrainedTokenizerFast.from_pretrained(model_path)
mol = "CCC(C)(C)"
mol_encoded = tokenizer(
    mol,
    add_special_tokens=True,
    padding=True,
    return_tensors="pt",  # added so the model call below receives tensors
)
input_ids = mol_encoded["input_ids"]
logits = model(**mol_encoded).logits[0]
assert not torch.any(torch.isinf(logits))  # just a safety check for debugging
assert not torch.any(torch.isnan(logits))  # just a safety check for debugging
# "Manual" sampling, works in both cases
probs = torch.nn.functional.softmax(logits, dim=-1)
next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
# Hugging Face sampling
next_tokens = model.sample(
    input_ids,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=100,
)  # This raises the RuntimeError exception when `reorder_and_upcast_attn=True`
```
### Expected behavior
I can think of the following options
- IF it's a bug, fix it ;-), i.e. `reorder_and_upcast_attn=True` should work for text generation
- if "massaging" the logits with `logits_processor` or `logits_warper` is required, update the docs
- improve my understanding of the option...
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20571/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20570
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20570/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20570/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20570/events
|
https://github.com/huggingface/transformers/pull/20570
| 1,474,259,052
|
PR_kwDOCUB6oc5EN2sC
| 20,570
|
Add TFBartForSequenceClassification
|
{
"login": "uglyboxer",
"id": 12128540,
"node_id": "MDQ6VXNlcjEyMTI4NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/12128540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uglyboxer",
"html_url": "https://github.com/uglyboxer",
"followers_url": "https://api.github.com/users/uglyboxer/followers",
"following_url": "https://api.github.com/users/uglyboxer/following{/other_user}",
"gists_url": "https://api.github.com/users/uglyboxer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uglyboxer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uglyboxer/subscriptions",
"organizations_url": "https://api.github.com/users/uglyboxer/orgs",
"repos_url": "https://api.github.com/users/uglyboxer/repos",
"events_url": "https://api.github.com/users/uglyboxer/events{/privacy}",
"received_events_url": "https://api.github.com/users/uglyboxer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh @sgugger Thank you both for your insightful reviews. Pushed some changes and posed a question back. ",
"Thanks again @ydshieh\r\n\r\nI'm afraid the special handling is necessary as the test `test_save_load_after_resize_token_embeddings` does some extra magic to alter the input ids. I took @sgugger 's suggestion to overwrite the test in the BartTester and move that logic into the test itself. That should clear up common.",
"> Good to go with the nits, thanks for bearing with us!\r\n\r\nHappy to take care of them! This is my first PR, thanks for all the help, and the seamless process.",
"> > Good to go with the nits, thanks for bearing with us!\r\n> \r\n> Happy to take care of them! This is my first PR, thanks for all the help, and the seamless process.\r\n\r\nYou are doing a great job! π― \r\n"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
This adds a sequence classification head to the TensorFlow implementation of BART, following the pattern of `BartForSequenceClassification` (the PyTorch version).
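A hedged usage sketch of the new head (the checkpoint name is illustrative; any TF-compatible BART checkpoint applies, and the classification head will be newly initialized if the checkpoint lacks one):
```python
# Hedged sketch of using the new TFBartForSequenceClassification head.
from transformers import AutoTokenizer, TFBartForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = TFBartForSequenceClassification.from_pretrained("facebook/bart-base", num_labels=2)

inputs = tokenizer("Transformers are great!", return_tensors="tf")
logits = model(**inputs).logits  # shape: (batch_size, num_labels)
```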
Fixes [#19653](https://github.com/huggingface/transformers/issues/19653)
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten, @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20570/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20570",
"html_url": "https://github.com/huggingface/transformers/pull/20570",
"diff_url": "https://github.com/huggingface/transformers/pull/20570.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20570.patch",
"merged_at": 1670432739000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20569
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20569/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20569/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20569/events
|
https://github.com/huggingface/transformers/pull/20569
| 1,474,202,620
|
PR_kwDOCUB6oc5ENrD4
| 20,569
|
Spanish translation of asr.mdx and add_new_pipeline.mdx
|
{
"login": "alceballosa",
"id": 23227057,
"node_id": "MDQ6VXNlcjIzMjI3MDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/23227057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alceballosa",
"html_url": "https://github.com/alceballosa",
"followers_url": "https://api.github.com/users/alceballosa/followers",
"following_url": "https://api.github.com/users/alceballosa/following{/other_user}",
"gists_url": "https://api.github.com/users/alceballosa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alceballosa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alceballosa/subscriptions",
"organizations_url": "https://api.github.com/users/alceballosa/orgs",
"repos_url": "https://api.github.com/users/alceballosa/repos",
"events_url": "https://api.github.com/users/alceballosa/events{/privacy}",
"received_events_url": "https://api.github.com/users/alceballosa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I think I committed all the suggested changes, thanks @osanseviero !",
"Thanks again for your contribution!"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Translates `asr.mdx` and `add_new_pipeline.mdx` into Spanish. Also updates the `_toctree.yml` file accordingly. Includes minor typo corrections for the original versions of both files and the translated version of a file I had previously worked on.
Related to #15947
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section?
@osanseviero @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20569/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20569",
"html_url": "https://github.com/huggingface/transformers/pull/20569",
"diff_url": "https://github.com/huggingface/transformers/pull/20569.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20569.patch",
"merged_at": 1670855004000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20568
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20568/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20568/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20568/events
|
https://github.com/huggingface/transformers/pull/20568
| 1,474,188,115
|
PR_kwDOCUB6oc5ENn9m
| 20,568
|
Added missing `test_tokenization_led`
|
{
"login": "IMvision12",
"id": 88665786,
"node_id": "MDQ6VXNlcjg4NjY1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IMvision12",
"html_url": "https://github.com/IMvision12",
"followers_url": "https://api.github.com/users/IMvision12/followers",
"following_url": "https://api.github.com/users/IMvision12/following{/other_user}",
"gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions",
"organizations_url": "https://api.github.com/users/IMvision12/orgs",
"repos_url": "https://api.github.com/users/IMvision12/repos",
"events_url": "https://api.github.com/users/IMvision12/events{/privacy}",
"received_events_url": "https://api.github.com/users/IMvision12/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh Can you give some more points of what exactly is to be done?\r\n\r\nAs per the points given by you, I need to first create 2 texts let's say `A long paragraph for summarization.` and `Another paragraph for`, and then encode them as `tokenizer.encode_plus(\"Another paragraph for\", padding=False)` passing padding as False so that it doesn't apply padding to text, and then we have to create a list of `global_attention_mask` let's say [0,0,0,0,0], doing this for both the text and then pass encoded_inputs along with `global_attention_mask` to the `tokenizer._pad()`",
"@IMvision12 Yes, that's the idea :-). Only at the end, you can do `tokenizer.pad()` instead -> it will call `_pad` internally.",
"@ydshieh Also what I really need to check in `assertEqual`?",
"We need to check the outputs after padding contains the key `global_attention_mask` and its value is the same as the expected one, which is the `global_attention_mask` being padded. You will either have to take a quick look in ` _pad` or at least run one example to get a better idea (which should be easy enough) what it does :-)",
"@ydshieh can you take a quick look at this function\r\nIs this expected to be done?\r\n\r\n```\r\n def test_global_attention(self):\r\n text = [\"A long paragraph for summarization.\", \"Another paragraph for summarization.\"]\r\n tokenizer = self.default_tokenizer_fast()\r\n \r\n input_1 = tokenizer.encode_plus(text[0], padding=False)\r\n input_1['global_attention_mask'] = [0,0,0,0,0]\r\n outputs_1 = tokenizer.pad(input_1)\r\n self.assertEqual(outputs_1['global_attention_mask'],[0, 0, 0, 0, 0, -1, -1, -1, -1])\r\n\r\n input_2 = tokenizer.encode_plus(text[1], padding=False)\r\n input_2['global_attention_mask'] = [0,0,0,0]\r\n outputs_2 = tokenizer.pad(input_2)\r\n self.assertEqual(outputs_2['global_attention_mask'],[0, 0, 0, 0, -1])\r\n\r\n```",
"@IMvision12 \r\n\r\nThe idea is to encode the 2 texts together without padding, and send the encoded outputs with `global_attention_mask` (not padded neither) to `.pad`.\r\n\r\nYou code above pads each sequence, which won't have any padding. The padding only happens with multiple sequences where the length are different.",
"@ydshieh sorry for pinging you so many times\r\nAlso i have created this colab for understanding https://colab.research.google.com/drive/1jYwtsE41ouAeh5aNzfWZ2LNLizFOwvQr?usp=sharing\r\n```\r\ndef test_global_attention_mask(self):\r\n text = [\"A long paragraph.\", \"Hi I am using huggingface transformers\"]\r\n tokenizer = self.default_tokenizer_fast()\r\n \r\n inputs = tokenizer.encode_plus(text, padding=False)\r\n inputs['global_attention_mask'] = [0,0,0,0,0,0,0,0]\r\n outputs = tokenizer.pad(inputs)\r\n self.assertEqual(outputs['global_attention_mask'],[0, 0, 0, 0, 0, 0, 0, 0, -1, -1, -1, -1, -1, -1, -1, -1])\r\n```",
"Hi, hope the following explains it more clearly :-)\r\n\r\n\r\nFirst, batch encoding\r\n```python\r\ntext = [\"A long paragraph.\", \"Hi I am using huggingface transformers\"]\r\nx = tokenizer(text, padding=False)\r\nx\r\n```\r\n\r\nAdd `global_attention_mask` that is not padded\r\n```python\r\nx['global_attention_mask'] = [[0] * len(y) for y in x[\"input_ids\"]]\r\nx\r\n```\r\n\r\nPad the whole un-padded inputs\r\n```\r\ntokenizer.pad(x)\r\n```",
"I am not sure why `tests_pipelines_tf` are failing",
"No need to worry about the TF pipeline test. I will take a look - it's probably irrelevant to this PR.",
"Could you update your local main branch , and rebase your working branch on local `main`?",
"@ydshieh Done! any more changes?",
"@ydshieh Thanks for a concise explanation of `global_attention_mask` and guidance!!"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Added the missing `test_tokenization_led`. It is similar to the BART tokenizer tests; I made some changes after testing in a local environment.
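For context, a hedged sketch of the padding behavior the new test exercises, distilled from the review discussion below (the checkpoint name is illustrative; `-1` padding of `global_attention_mask` follows the expected values quoted in the comments):
```python
# Hedged sketch of padding `global_attention_mask` via tokenizer.pad().
from transformers import LEDTokenizerFast

tokenizer = LEDTokenizerFast.from_pretrained("allenai/led-base-16384")
text = ["A long paragraph.", "Hi I am using huggingface transformers"]

enc = tokenizer(text, padding=False)  # batch-encode, no padding yet
enc["global_attention_mask"] = [[0] * len(ids) for ids in enc["input_ids"]]

padded = tokenizer.pad(enc)  # pads global_attention_mask with -1 as well
```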
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20568/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20568",
"html_url": "https://github.com/huggingface/transformers/pull/20568",
"diff_url": "https://github.com/huggingface/transformers/pull/20568.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20568.patch",
"merged_at": 1670529322000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20567
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20567/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20567/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20567/events
|
https://github.com/huggingface/transformers/issues/20567
| 1,474,152,325
|
I_kwDOCUB6oc5X3ceF
| 20,567
|
Whether to use 'logits' or 'loss' in LabelSmoother
|
{
"login": "zalmanchen",
"id": 70104951,
"node_id": "MDQ6VXNlcjcwMTA0OTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/70104951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zalmanchen",
"html_url": "https://github.com/zalmanchen",
"followers_url": "https://api.github.com/users/zalmanchen/followers",
"following_url": "https://api.github.com/users/zalmanchen/following{/other_user}",
"gists_url": "https://api.github.com/users/zalmanchen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zalmanchen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zalmanchen/subscriptions",
"organizations_url": "https://api.github.com/users/zalmanchen/orgs",
"repos_url": "https://api.github.com/users/zalmanchen/repos",
"events_url": "https://api.github.com/users/zalmanchen/events{/privacy}",
"received_events_url": "https://api.github.com/users/zalmanchen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to ask such questions, as we keep issues for bugs and feature requests only. The labels are popped when we use label smoothing, so the loss is not included in the ouputs/",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
NONE
| null |
### System Info
- 'transformers' version: 4.24.0
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
it is in "transformers/trainer.py"

it is in "transformers/trainer_pt_utils.py"

### Expected behavior
This is not really a bug report; I just have a question about a variable used in a function.
In `transformers/trainer_pt_utils.py`, in the `LabelSmoother.__call__` method, I noticed that it uses the **'logits'** value, but **`output[0]`** is selected under the `else` condition. If I am not mistaken, `output[0]` should represent the loss (I use **`BartForConditionalGeneration`**). So is there a problem here?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20567/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20566
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20566/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20566/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20566/events
|
https://github.com/huggingface/transformers/pull/20566
| 1,474,130,527
|
PR_kwDOCUB6oc5ENcft
| 20,566
|
Spanish translation of the file debugging.mdx
|
{
"login": "SimplyJuanjo",
"id": 87780148,
"node_id": "MDQ6VXNlcjg3NzgwMTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/87780148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SimplyJuanjo",
"html_url": "https://github.com/SimplyJuanjo",
"followers_url": "https://api.github.com/users/SimplyJuanjo/followers",
"following_url": "https://api.github.com/users/SimplyJuanjo/following{/other_user}",
"gists_url": "https://api.github.com/users/SimplyJuanjo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SimplyJuanjo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SimplyJuanjo/subscriptions",
"organizations_url": "https://api.github.com/users/SimplyJuanjo/orgs",
"repos_url": "https://api.github.com/users/SimplyJuanjo/repos",
"events_url": "https://api.github.com/users/SimplyJuanjo/events{/privacy}",
"received_events_url": "https://api.github.com/users/SimplyJuanjo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Can you just add the new file to the TOC of the Spanish doc? (in `transformers/docs/source/es/_toctree.yml`)",
"@sgugger like that is fine? Tried to mimic the eng TOC creating the same section \"Rendimiento y escalabilidad\"",
"_The documentation is not available anymore as the PR was closed or merged._",
"All good, thanks!"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #[15947](https://github.com/huggingface/transformers/issues/15947)
Adds the Spanish version of [debugging.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/en/debugging.mdx) to [transformers/docs/source/es](https://github.com/huggingface/transformers/tree/main/docs/source/es)
I also found one typo in the original doc, so I fixed it as well.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
## Who can review?
@omarespejel @osanseviero @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20566/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20566",
"html_url": "https://github.com/huggingface/transformers/pull/20566",
"diff_url": "https://github.com/huggingface/transformers/pull/20566.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20566.patch",
"merged_at": 1670859537000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20565
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20565/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20565/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20565/events
|
https://github.com/huggingface/transformers/issues/20565
| 1,473,950,266
|
I_kwDOCUB6oc5X2rI6
| 20,565
|
Train with multiple eval datasets raises an Exception
|
{
"login": "eyalmazuz",
"id": 34383384,
"node_id": "MDQ6VXNlcjM0MzgzMzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/34383384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyalmazuz",
"html_url": "https://github.com/eyalmazuz",
"followers_url": "https://api.github.com/users/eyalmazuz/followers",
"following_url": "https://api.github.com/users/eyalmazuz/following{/other_user}",
"gists_url": "https://api.github.com/users/eyalmazuz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyalmazuz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyalmazuz/subscriptions",
"organizations_url": "https://api.github.com/users/eyalmazuz/orgs",
"repos_url": "https://api.github.com/users/eyalmazuz/repos",
"events_url": "https://api.github.com/users/eyalmazuz/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyalmazuz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"A workaround (or solution) for the problem that I found is as follows:\r\n\r\n1. Add an optional parameter to evaluate the method in trainer called ``multiple`` and default it to False\r\n2. change the line of code logging happens from:\r\n```\r\nself.control = self.callback_handler.on_evaluate(self.args, self.state, self.control, output.metrics)\r\n```\r\nto:\r\n```\r\nif not multiple:\r\n self.control = self.callback_handler.on_evaluate(self.args, self.state, self.control, output.metrics)\r\n```\r\n3. on ``_maybe_log_save_evaluate`` change the following block from:\r\n```\r\n if self.control.should_evaluate:\r\n if isinstance(self.eval_dataset, dict):\r\n for eval_dataset_name, eval_dataset in self.eval_dataset.items():\r\n metrics = self.evaluate(\r\n eval_dataset=eval_dataset,\r\n ignore_keys=ignore_keys_for_eval,\r\n metric_key_prefix=f\"eval_{eval_dataset_name}\",\r\n )\r\n else:\r\n metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)\r\n self._report_to_hp_search(trial, self.state.global_step, metrics)\r\n\r\n if self.control.should_save:\r\n self._save_checkpoint(model, trial, metrics=metrics)\r\n self.control = self.callback_handler.on_save(self.args, self.state, self.control)\r\n```\r\nto\r\n```\r\nif self.control.should_evaluate:\r\n if isinstance(self.eval_dataset, dict):\r\n all_metrics = {}\r\n for eval_dataset_name, eval_dataset in self.eval_dataset.items():\r\n metrics = self.evaluate(\r\n eval_dataset=eval_dataset,\r\n ignore_keys=ignore_keys_for_eval,\r\n metric_key_prefix=f\"eval_{eval_dataset_name}\", multiple=True,\r\n )\r\n all_metrics = {**all_metrics, **metrics}\r\n self.control = self.callback_handler.on_evaluate(self.args, self.state, self.control, all_metrics)\r\n metrics = all_metrics\r\n else:\r\n metrics = self.evaluate(ignore_keys=ignore_keys_for_eval, multiple=False)\r\n self._report_to_hp_search(trial, self.state.global_step, metrics)\r\n\r\n if self.control.should_save:\r\n self._save_checkpoint(model, trial, metrics=metrics)\r\n self.control = self.callback_handler.on_save(self.args, self.state, self.control)\r\n```\r\n\r\nNow I get the desired outcome.\r\n\r\n\r\n\r\n\r\nedit: this solution breaks the tqdm progress bar",
"Thanks for the report. The Trainer does not support multiple evaluation datasets when in a notebook indeed.\r\nAs a workaround, you can also disable the notebook progress bars by doing `trainer.remove_callback(NotebookProgressCallback)` to avoid having the issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"If I were to disable the notebook progress bars by doing` trainer.remove_callback(NotebookProgressCallback)`, then how do I view/retrieve the evaluation results on the multiple datasets (without using print statements or logging)?",
"I think the multiple dataset notebook support will be added here: #25796"
] | 1,670
| 1,694
| 1,673
|
NONE
| null |
### System Info
Transformers: 4.25.1
Python: 3.10.8
OS: Manjaro-KDE
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Create the following compute metrics method:
```python
import numpy as np
import evaluate  # assumption: `accuracy`, `auc`, `f1` come from the evaluate library
from sklearn.metrics import average_precision_score

accuracy = evaluate.load("accuracy")
auc = evaluate.load("roc_auc")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    accuracy_score = accuracy.compute(predictions=predictions, references=labels)
    auc_score = auc.compute(prediction_scores=logits[:, 1], references=labels)
    f1_score = f1.compute(predictions=predictions, references=labels)
    aupr = average_precision_score(y_score=logits[:, 1], y_true=labels)
    return {**f1_score, **{"PR-AUC": aupr}, **accuracy_score, **auc_score}
```
2. Create a trainer object with multiple eval datasets:
```python
from transformers import IntervalStrategy, Trainer, TrainingArguments  # added imports
training_args = TrainingArguments(
output_dir="./results",
learning_rate=2e-5,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
gradient_accumulation_steps=4,
num_train_epochs=3,
weight_decay=0.01,
evaluation_strategy=IntervalStrategy.STEPS,
report_to='wandb',
run_name='ChemBERTa Test',
log_level='critical',
logging_steps=1,
)
```
```python
trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset["train"],
eval_dataset={'Validation': dataset["validation"], 'Test': dataset["test"]},
tokenizer=tokenizer,
compute_metrics=compute_metrics
)
```
3. Run the trainer:
``trainer.train()``
Result:
[screenshots of the raised `KeyError` traceback omitted]
The problem arises in ``transformers/utils/notebook.py``: line 244 raises ``KeyError: 'Validation F1'``.
The reason is that the column names for the table printed in the notebook are only populated on the first iteration (lines 237-243).
But the trainer runs each dataset in a standalone ``self.evaluate`` run
So when the 2nd ``self.evaluate`` with the test dataset calls
``self.callback_handler.on_evaluate(self.args, self.state, self.control, output.metrics)``
the table already has the names of the columns from the validation set, so lines 237-243 don't run again
and then on line 244, since it's the test dataset, all the metrics are called "Test <metric>" so it raises an error since it iterates over the table columns and doesn't find the validation metrics.
A workaround is removing the if statement in line 238 and having it update columns each time it's called
and changing line 244 to: ``self.inner_table.append([values[c] if c in values else 'NaN' for c in columns])``
i.e.
```
def write_line(self, values):
"""
Write the values in the inner table.
Args:
values (`Dict[str, float]`): The values to display.
"""
if self.inner_table is None:
self.inner_table = [list(values.keys()), list(values.values())]
else:
columns = self.inner_table[0]
# if len(self.inner_table) == 1:
# # We give a chance to update the column names at the first iteration
for key in values.keys():
if key not in columns:
columns.append(key)
self.inner_table[0] = columns
print(f'{columns=}, {values=}')
self.inner_table.append([values[c] if c in values else 'NaN' for c in columns])
```
(the print is only for debugging)
but then it creates a weird table where the rows are duplicated at the logging step: half of the columns in the first row are NaN, and the other half are NaN in the second row.
[screenshot of the malformed metrics table omitted]
edit: Another bug is on ``NotebookProgressCallback.on_evaluate`` lines 342-345
```
for k, v in metrics.items():
if k == f"{metric_key_prefix}_loss":
values["Validation Loss"] = v
```
This forces the output to always be named "Validation Loss", even if I use the test dataset; this means that in the created table I don't get two different losses printed, because the test loss overrides the validation loss.
### Expected behavior
A single row containing both the validation and test metrics
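For anyone landing here, a short sketch of the workaround later suggested in the comments; it assumes the `trainer` built in the reproduction steps above:
```python
# Hedged workaround: drop the notebook progress callback so the
# table-writing path in transformers/utils/notebook.py is never reached.
from transformers.utils.notebook import NotebookProgressCallback

trainer.remove_callback(NotebookProgressCallback)
trainer.train()
```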
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20565/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20565/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20564
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20564/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20564/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20564/events
|
https://github.com/huggingface/transformers/pull/20564
| 1,473,684,904
|
PR_kwDOCUB6oc5EL4Ua
| 20,564
|
make states contiguous for past_key_values
|
{
"login": "xyjigsaw",
"id": 26840761,
"node_id": "MDQ6VXNlcjI2ODQwNzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/26840761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyjigsaw",
"html_url": "https://github.com/xyjigsaw",
"followers_url": "https://api.github.com/users/xyjigsaw/followers",
"following_url": "https://api.github.com/users/xyjigsaw/following{/other_user}",
"gists_url": "https://api.github.com/users/xyjigsaw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyjigsaw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyjigsaw/subscriptions",
"organizations_url": "https://api.github.com/users/xyjigsaw/orgs",
"repos_url": "https://api.github.com/users/xyjigsaw/repos",
"events_url": "https://api.github.com/users/xyjigsaw/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyjigsaw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
NONE
| null |
Add `.contiguous()` for key and value states.
If someone uses `past_key_values`, it seems to raise the following exception:
```
RuntimeError: view size is not compatible with input tensor's size and stride ...
```
since BART executes `torch.cat` in the `BartAttention` class:
```python
key_states = torch.cat([past_key_value[0], key_states], dim=2)
value_states = torch.cat([past_key_value[1], value_states], dim=2)
```
Thus, we should make `key_states` and `value_states` contiguous.
@patrickvonplaten
---
Additionally, BART cannot correctly process the length of `attention_mask` when `past_key_values` is provided.
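For illustration, a minimal self-contained sketch of the failure mode this PR addresses. This is not the PR's diff, just the underlying PyTorch behavior:
```python
# Calling .view() on a non-contiguous tensor raises the RuntimeError above.
import torch

x = torch.randn(2, 4, 8).transpose(1, 2)  # transpose makes the tensor non-contiguous
try:
    x.view(2, -1)                          # raises: view size is not compatible ...
except RuntimeError as e:
    print(e)

y = x.contiguous().view(2, -1)             # .contiguous() fixes it
```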
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20564/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20564",
"html_url": "https://github.com/huggingface/transformers/pull/20564",
"diff_url": "https://github.com/huggingface/transformers/pull/20564.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20564.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20563
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20563/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20563/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20563/events
|
https://github.com/huggingface/transformers/issues/20563
| 1,473,678,202
|
I_kwDOCUB6oc5X1ot6
| 20,563
|
Model bart cannot correctly process the length of attention_mask when the item of past_key_values is added.
|
{
"login": "xyjigsaw",
"id": 26840761,
"node_id": "MDQ6VXNlcjI2ODQwNzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/26840761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyjigsaw",
"html_url": "https://github.com/xyjigsaw",
"followers_url": "https://api.github.com/users/xyjigsaw/followers",
"following_url": "https://api.github.com/users/xyjigsaw/following{/other_user}",
"gists_url": "https://api.github.com/users/xyjigsaw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyjigsaw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyjigsaw/subscriptions",
"organizations_url": "https://api.github.com/users/xyjigsaw/orgs",
"repos_url": "https://api.github.com/users/xyjigsaw/repos",
"events_url": "https://api.github.com/users/xyjigsaw/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyjigsaw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker \r\n\r\n@xyjigsaw could you please add a complete reproducible code snippet here though? \r\n\r\nWe cannot run:\r\n\r\n```python\r\noutputs = self.bart(input_ids=input_ids, attention_mask=attention_mask, past_key_values=past_key_values, labels=labels)\r\n```\r\n\r\nbecause we don't know what `input_ids`, etc... is.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 4.18.0
- Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Execute the code:
```python
outputs = self.bart(input_ids=input_ids,
                    attention_mask=attention_mask,
                    past_key_values=past_key_values,
                    labels=labels)
```
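For reference, a self-contained sketch of this call pattern might look like the following (checkpoint name and inputs are assumptions, not the reporter's actual code):
```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

enc = tokenizer("Hello world", return_tensors="pt")
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])

# first pass builds the decoder cache
out = model(**enc, decoder_input_ids=decoder_input_ids, use_cache=True)

# second pass feeds only the newly predicted token plus the cache;
# this is where the mask-length and contiguity issues show up
next_token = out.logits[:, -1:].argmax(-1)
out = model(**enc, decoder_input_ids=next_token, past_key_values=out.past_key_values)
```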
The Bart model cannot correctly process the length of the attention_mask when items from past_key_values are added.
Additionally, if someone uses past_key_values, the following exception may be raised:
```
RuntimeError: view size is not compatible with input tensor's size and stride ...
```
since Bart executes torch.cat in the BartAttention class:
```python
key_states = torch.cat([past_key_value[0], key_states], dim=2)
value_states = torch.cat([past_key_value[1], value_states], dim=2)
```
Thus, we should make key_states and value_states contiguous.
@patrickvonplaten
---
### Expected behavior
It will run correctly.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20563/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20562
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20562/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20562/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20562/events
|
https://github.com/huggingface/transformers/pull/20562
| 1,473,595,466
|
PR_kwDOCUB6oc5ELktv
| 20,562
|
Clip floating point constants to bf16 range to avoid inf conversion
|
{
"login": "sangeethabal",
"id": 83724701,
"node_id": "MDQ6VXNlcjgzNzI0NzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83724701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sangeethabal",
"html_url": "https://github.com/sangeethabal",
"followers_url": "https://api.github.com/users/sangeethabal/followers",
"following_url": "https://api.github.com/users/sangeethabal/following{/other_user}",
"gists_url": "https://api.github.com/users/sangeethabal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sangeethabal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sangeethabal/subscriptions",
"organizations_url": "https://api.github.com/users/sangeethabal/orgs",
"repos_url": "https://api.github.com/users/sangeethabal/repos",
"events_url": "https://api.github.com/users/sangeethabal/events{/privacy}",
"received_events_url": "https://api.github.com/users/sangeethabal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Oh, looks like something went wrong in your rebase (see the diff showing lots of files). You can either force-push a commit (with --force) to repare the history for git, or close this PR and open a fresh one."
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
When running the HuggingFace BERT (any size) fine-tuning tutorial with transformers >= 4.21.0 and XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1, I see NaNs in the loss after the first step.
# What does this PR do?
This PR addresses the issue where the model code passes a value that is out of range for XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1, so the conversion would cast it to -inf.
The NaNs likely come from the transformers library change https://github.com/huggingface/transformers/pull/17306, which replaced many lines that used to be `-float("inf")` (or other small constants) with `torch.finfo().min`. For torch.float32 the min value is -3.4028234663852886e+38, which is smaller than the bfloat16 minimum of -3.3895313892515355e+38. So the problem is that torch.finfo(torch.float32).min = -3.4028234663852886e+38 gets converted to -inf. When the original encoder_extended_attention_mask is 1, encoder_extended_attention_mask becomes (1.0 - 1.0) * -inf, which becomes NaN (via the IEEE rule inf * 0.0 = NaN).
This PR ensures the constant used is torch.finfo(torch.bfloat16).min = -3.3895313892515355e+38 rather than -inf, so the results no longer contain NaNs.
The following lines check for the XLA_USE_BF16 or XLA_DOWNCAST_BF16 environment variables and set the dtype accordingly:
```python
if is_torch_tpu_available():
if os.environ.get("XLA_USE_BF16"):
return torch.bfloat16
if os.environ.get("XLA_DOWNCAST_BF16"):
if t.dtype == torch.float:
return torch.bfloat16
if t.dtype == torch.double:
return torch.float32
```
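As a minimal illustration of the idea (variable names are assumptions, not the PR's diff):
```python
import torch

attention_mask = torch.tensor([[1.0, 1.0, 0.0]])

# torch.finfo(torch.float32).min overflows to -inf when downcast to bfloat16;
# the bfloat16 minimum stays finite, so (1.0 - 1.0) * min stays 0.0 instead of NaN
mask_value = torch.finfo(torch.bfloat16).min
extended_mask = (1.0 - attention_mask) * mask_value
print(extended_mask.to(torch.bfloat16))  # finite values, no -inf
```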
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20562/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20562",
"html_url": "https://github.com/huggingface/transformers/pull/20562",
"diff_url": "https://github.com/huggingface/transformers/pull/20562.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20562.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20561
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20561/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20561/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20561/events
|
https://github.com/huggingface/transformers/pull/20561
| 1,473,577,439
|
PR_kwDOCUB6oc5ELgv4
| 20,561
|
Fix code sample in preprocess
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
MEMBER
| null |
This PR fixes the code sample for preprocessing an image to use the new `ImageProcessor`.
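For context, a minimal example of the updated API might look like this (checkpoint name is an assumption, not necessarily the one used in the docs):
```python
import requests
from PIL import Image
from transformers import AutoImageProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
inputs = image_processor(image, return_tensors="pt")  # pixel_values ready for the model
```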
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20561/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20561",
"html_url": "https://github.com/huggingface/transformers/pull/20561",
"diff_url": "https://github.com/huggingface/transformers/pull/20561.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20561.patch",
"merged_at": 1670269783000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20560
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20560/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20560/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20560/events
|
https://github.com/huggingface/transformers/pull/20560
| 1,473,398,092
|
PR_kwDOCUB6oc5EK6ZY
| 20,560
|
Fix link to table transformer detection microsoft model
|
{
"login": "JuanFKurucz",
"id": 31422367,
"node_id": "MDQ6VXNlcjMxNDIyMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/31422367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuanFKurucz",
"html_url": "https://github.com/JuanFKurucz",
"followers_url": "https://api.github.com/users/JuanFKurucz/followers",
"following_url": "https://api.github.com/users/JuanFKurucz/following{/other_user}",
"gists_url": "https://api.github.com/users/JuanFKurucz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuanFKurucz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuanFKurucz/subscriptions",
"organizations_url": "https://api.github.com/users/JuanFKurucz/orgs",
"repos_url": "https://api.github.com/users/JuanFKurucz/repos",
"events_url": "https://api.github.com/users/JuanFKurucz/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuanFKurucz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,691
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Similar to #20558, the link to the `microsoft/table-transformer-detection` model seems to be outdated or contains a typo and redirects to a 404.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20560/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20560",
"html_url": "https://github.com/huggingface/transformers/pull/20560",
"diff_url": "https://github.com/huggingface/transformers/pull/20560.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20560.patch",
"merged_at": 1670258608000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20559
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20559/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20559/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20559/events
|
https://github.com/huggingface/transformers/pull/20559
| 1,473,369,261
|
PR_kwDOCUB6oc5EK0CU
| 20,559
|
Split autoclasses on modality
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
MEMBER
| null |
This PR groups `AutoModel`, `TFAutoModel` and `FlaxAutoModel` by modality to make them easier to discover.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20559/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20559",
"html_url": "https://github.com/huggingface/transformers/pull/20559",
"diff_url": "https://github.com/huggingface/transformers/pull/20559.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20559.patch",
"merged_at": 1670272124000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20558
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20558/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20558/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20558/events
|
https://github.com/huggingface/transformers/pull/20558
| 1,473,336,775
|
PR_kwDOCUB6oc5EKs0S
| 20,558
|
Fix link to swin transformers v2 microsoft model
|
{
"login": "JuanFKurucz",
"id": 31422367,
"node_id": "MDQ6VXNlcjMxNDIyMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/31422367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuanFKurucz",
"html_url": "https://github.com/JuanFKurucz",
"followers_url": "https://api.github.com/users/JuanFKurucz/followers",
"following_url": "https://api.github.com/users/JuanFKurucz/following{/other_user}",
"gists_url": "https://api.github.com/users/JuanFKurucz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuanFKurucz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuanFKurucz/subscriptions",
"organizations_url": "https://api.github.com/users/JuanFKurucz/orgs",
"repos_url": "https://api.github.com/users/JuanFKurucz/repos",
"events_url": "https://api.github.com/users/JuanFKurucz/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuanFKurucz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,691
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
The link `https://huggingface.co/microsoft/swinv2_tiny_patch4_windows8_256/` redirects to a 404. The actual link is https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256.
At the same time, loading the configuration using
```python3
from transformers import AutoConfig
config = AutoConfig.from_pretrained("microsoft/swinv2_tiny_patch4_windows8_256")
```
Returns
```
HTTPError: 401 Client Error: Unauthorized for url:
https://huggingface.co/microsoft/swinv2_tiny_patch4_windows8_256/resolve/main/config.json
```
As the link is not valid, this change fixes it.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20558/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20558",
"html_url": "https://github.com/huggingface/transformers/pull/20558",
"diff_url": "https://github.com/huggingface/transformers/pull/20558.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20558.patch",
"merged_at": 1670258584000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20557
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20557/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20557/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20557/events
|
https://github.com/huggingface/transformers/pull/20557
| 1,473,318,186
|
PR_kwDOCUB6oc5EKov3
| 20,557
|
Fix link to Swin Model contributor novice03
|
{
"login": "JuanFKurucz",
"id": 31422367,
"node_id": "MDQ6VXNlcjMxNDIyMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/31422367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuanFKurucz",
"html_url": "https://github.com/JuanFKurucz",
"followers_url": "https://api.github.com/users/JuanFKurucz/followers",
"following_url": "https://api.github.com/users/JuanFKurucz/following{/other_user}",
"gists_url": "https://api.github.com/users/JuanFKurucz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuanFKurucz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuanFKurucz/subscriptions",
"organizations_url": "https://api.github.com/users/JuanFKurucz/orgs",
"repos_url": "https://api.github.com/users/JuanFKurucz/repos",
"events_url": "https://api.github.com/users/JuanFKurucz/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuanFKurucz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,691
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a `>` typo in the `https://huggingface.co/novice03>` link, which redirects to a 404 Not Found page.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20557/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20557",
"html_url": "https://github.com/huggingface/transformers/pull/20557",
"diff_url": "https://github.com/huggingface/transformers/pull/20557.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20557.patch",
"merged_at": 1670258549000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20556
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20556/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20556/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20556/events
|
https://github.com/huggingface/transformers/pull/20556
| 1,473,297,391
|
PR_kwDOCUB6oc5EKkQQ
| 20,556
|
Fix flax GPT-J-6B linking model in tests
|
{
"login": "JuanFKurucz",
"id": 31422367,
"node_id": "MDQ6VXNlcjMxNDIyMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/31422367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuanFKurucz",
"html_url": "https://github.com/JuanFKurucz",
"followers_url": "https://api.github.com/users/JuanFKurucz/followers",
"following_url": "https://api.github.com/users/JuanFKurucz/following{/other_user}",
"gists_url": "https://api.github.com/users/JuanFKurucz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuanFKurucz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuanFKurucz/subscriptions",
"organizations_url": "https://api.github.com/users/JuanFKurucz/orgs",
"repos_url": "https://api.github.com/users/JuanFKurucz/repos",
"events_url": "https://api.github.com/users/JuanFKurucz/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuanFKurucz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the loading of the model `EleutherAI/gpt-j-6B`: the current code links to `EleutherAI/gptj-6B`, which does not exist, causing the test to fail.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20556/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20556",
"html_url": "https://github.com/huggingface/transformers/pull/20556",
"diff_url": "https://github.com/huggingface/transformers/pull/20556.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20556.patch",
"merged_at": 1670245206000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20555
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20555/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20555/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20555/events
|
https://github.com/huggingface/transformers/pull/20555
| 1,473,201,497
|
PR_kwDOCUB6oc5EKPa-
| 20,555
|
flan-t5.mdx: fix link to large model
|
{
"login": "szhublox",
"id": 91105156,
"node_id": "MDQ6VXNlcjkxMTA1MTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/91105156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/szhublox",
"html_url": "https://github.com/szhublox",
"followers_url": "https://api.github.com/users/szhublox/followers",
"following_url": "https://api.github.com/users/szhublox/following{/other_user}",
"gists_url": "https://api.github.com/users/szhublox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/szhublox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/szhublox/subscriptions",
"organizations_url": "https://api.github.com/users/szhublox/orgs",
"repos_url": "https://api.github.com/users/szhublox/repos",
"events_url": "https://api.github.com/users/szhublox/events{/privacy}",
"received_events_url": "https://api.github.com/users/szhublox/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
## Before submitting
- [*] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
Documentation: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20555/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20555",
"html_url": "https://github.com/huggingface/transformers/pull/20555",
"diff_url": "https://github.com/huggingface/transformers/pull/20555.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20555.patch",
"merged_at": 1670005667000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20554
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20554/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20554/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20554/events
|
https://github.com/huggingface/transformers/pull/20554
| 1,473,180,477
|
PR_kwDOCUB6oc5EKKz7
| 20,554
|
Cleanup config attrs
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
These are vision models, and they don't form encoder-decoder architectures themselves (unlike some text models such as `Bart`).
Furthermore, the current default value (specified in each config class's `__init__`) for these configs is `False`, which is the same as the default value in `PretrainedConfig`. So we can just remove it from the parameters and rely on `**kwargs` in the call to `super().__init__()`.
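As a minimal sketch of the pattern (class and attribute names are illustrative, not an actual model config):
```python
from transformers import PretrainedConfig

class ExampleVisionConfig(PretrainedConfig):
    def __init__(self, hidden_size=768, **kwargs):
        # is_encoder_decoder is no longer listed explicitly: PretrainedConfig
        # already defaults it to False, and callers can still override it
        # through **kwargs
        super().__init__(**kwargs)
        self.hidden_size = hidden_size
```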
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20554/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20554",
"html_url": "https://github.com/huggingface/transformers/pull/20554",
"diff_url": "https://github.com/huggingface/transformers/pull/20554.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20554.patch",
"merged_at": 1670249530000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20553
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20553/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20553/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20553/events
|
https://github.com/huggingface/transformers/pull/20553
| 1,472,847,202
|
PR_kwDOCUB6oc5EJB-3
| 20,553
|
exclude jit time from the speed metric calculation of evaluation and …
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger @jianan-gu please have a review",
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger done, please have a review of it."
] | 1,669
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
…prediction
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20553/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20553",
"html_url": "https://github.com/huggingface/transformers/pull/20553",
"diff_url": "https://github.com/huggingface/transformers/pull/20553.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20553.patch",
"merged_at": 1670330222000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20552
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20552/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20552/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20552/events
|
https://github.com/huggingface/transformers/issues/20552
| 1,472,810,672
|
I_kwDOCUB6oc5XyU6w
| 20,552
|
`TrainingArguments` `lr_scheduler_type="cosine_with_restarts"` can/does not pass a `num_cycles` argument to `get_cosine_with_hard_restarts_schedule_with_warmup()`
|
{
"login": "hogru",
"id": 3949272,
"node_id": "MDQ6VXNlcjM5NDkyNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3949272?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hogru",
"html_url": "https://github.com/hogru",
"followers_url": "https://api.github.com/users/hogru/followers",
"following_url": "https://api.github.com/users/hogru/following{/other_user}",
"gists_url": "https://api.github.com/users/hogru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hogru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hogru/subscriptions",
"organizations_url": "https://api.github.com/users/hogru/orgs",
"repos_url": "https://api.github.com/users/hogru/repos",
"events_url": "https://api.github.com/users/hogru/events{/privacy}",
"received_events_url": "https://api.github.com/users/hogru/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Yes, there is no argument to pass that information, so in this instance you should either build the scheduler yourself and pass it, or subclass the `Trainer` to override the `create_scheduler` method, whichever you prefer.\r\n\r\nIn both cases the formula you passed should give the good number of training steps!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I believe this should be fixed at a certain point!",
"Is there any example to build scheduler using ```get_cosine_with_hard_restarts_schedule_with_warmup()``` and pass it to the Trainer by including it in the ```TrainingArguments```? "
] | 1,669
| 1,705
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.12.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I paste some dummy code but I think the explanation is more important (unless I have overlooked something): The `lr_scheduler_type="cosine_with_restarts"` that I pass to the `TrainingArguments` is used to call `get_scheduler()` in `optimization.py`. There it's mapped to `get_cosine_with_hard_restarts_schedule_with_warmup()`, but without a `num_cycles` argument, defaulting to `1`, i.e. it behaves like the `cosine` option.
Probably I could build the scheduler myself and pass it to the `Trainer`, but then I would need to calculate `num_training_steps` myself, correct? If true, would `len(train_dataset) * num_epochs // batch_size // gradient_accumulation_steps` be a decent approximation?
```python
args = TrainingArguments(
output_dir="./checkpoints",
per_device_train_batch_size=128,
per_device_eval_batch_size=128,
evaluation_strategy="steps",
eval_steps=1_000,
logging_steps=1_000,
gradient_accumulation_steps=8,
num_train_epochs=50,
weight_decay=0.1,
warmup_steps=5_000,
lr_scheduler_type="cosine_with_restarts", # that's actually the only relevant line
learning_rate=5e-4,
save_steps=1_000,
)
trainer = Trainer(
model=model,
tokenizer=tokenizer,
args=args,
data_collator=data_collator,
train_dataset=tokenized_data["train"],
eval_dataset=tokenized_data["validation"],
)
trainer.train()
```
### Expected behavior
Passing `lr_scheduler_type="cosine_with_restarts"` should allow for an additional parameter `num_cycles` in `TrainingArguments` which should then be passed on to `get_cosine_with_hard_restarts_schedule_with_warmup()`.
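For reference, a hedged sketch of the subclassing workaround suggested in the comments (`num_cycles=3` is an arbitrary example value):
```python
from transformers import Trainer
from transformers.optimization import get_cosine_with_hard_restarts_schedule_with_warmup

class CosineRestartsTrainer(Trainer):
    def create_scheduler(self, num_training_steps: int, optimizer=None):
        if self.lr_scheduler is None:
            self.lr_scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
                optimizer if optimizer is not None else self.optimizer,
                num_warmup_steps=self.args.get_warmup_steps(num_training_steps),
                num_training_steps=num_training_steps,
                num_cycles=3,  # the value cosine_with_restarts cannot currently receive
            )
        return self.lr_scheduler
```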
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20552/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20552/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20551
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20551/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20551/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20551/events
|
https://github.com/huggingface/transformers/pull/20551
| 1,472,723,663
|
PR_kwDOCUB6oc5EImyQ
| 20,551
|
Add entries to `FEATURE_EXTRACTOR_MAPPING_NAMES`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Add entries to `FEATURE_EXTRACTOR_MAPPING_NAMES`
Not sure if there was any reason not to add these entries to `FEATURE_EXTRACTOR_MAPPING_NAMES`.
Furthermore, without these entries, we get some test failures for the (WIP) improved pipeline tests, because we can now generate tiny models for these config classes with the corresponding tokenizer/processor. (Previously these couldn't be generated.)
The failures are because this line
https://github.com/huggingface/transformers/blob/cc3d0e1b017dbb8dcbba1eb01be77aef7bacee1a/tests/pipelines/test_pipelines_feature_extraction.py#L182
is not able to skip relevant tests for these configs/models.
**Remark: I am going to add them to `TOKENIZER_MAPPING_NAMES` too**
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20551/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20551",
"html_url": "https://github.com/huggingface/transformers/pull/20551",
"diff_url": "https://github.com/huggingface/transformers/pull/20551.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20551.patch",
"merged_at": 1670249418000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20550
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20550/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20550/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20550/events
|
https://github.com/huggingface/transformers/pull/20550
| 1,472,592,418
|
PR_kwDOCUB6oc5EIKXo
| 20,550
|
Add BiT + ViT hybrid
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks so much @sgugger for your review ! \r\nI should have updated everything and the main models are now up:\r\n- https://huggingface.co/google/vit-hybrid-base-bit-384 \r\n- https://huggingface.co/google/bit-50"
] | 1,669
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds ViT hybrid to the library. As ViT hybrid uses BiT as its backbone, this PR also adds BiT as a standalone model.
BiT itself is very similar to a ResNetv2, except that it replaces batch norm layers with group norm and uses "weight standardized" convolutional layers.
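A minimal sketch of a weight-standardized convolution as described above (illustrative, not the PR's exact implementation):
```python
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    def forward(self, x):
        w = self.weight
        # standardize each filter to zero mean / unit variance before convolving
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        std = w.std(dim=(1, 2, 3), keepdim=True) + 1e-6
        return F.conv2d(x, (w - mean) / std, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```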
To do:
- [x] add image processors
- [ ] add tests for image processors (cc @amyeroberts can I directly add test_modeling_image_processor_xxx.py ?)
- [ ] transfer all checkpoints
- [x] add integration tests
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20550/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20550/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20550",
"html_url": "https://github.com/huggingface/transformers/pull/20550",
"diff_url": "https://github.com/huggingface/transformers/pull/20550.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20550.patch",
"merged_at": 1670407419000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20549
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20549/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20549/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20549/events
|
https://github.com/huggingface/transformers/issues/20549
| 1,472,542,612
|
I_kwDOCUB6oc5XxTeU
| 20,549
|
processor.model_input_names doesn't work as it should
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Also cc'ing @sgugger and @amyeroberts here",
"There is no generic call method of the processors, like there is for the tokenizers, so to enforce that `model_input_names` only returns the keys you want, it's up to you to have the call method of your processor filter those outputs.\r\n\r\nAs for the second point, `model_input_names` are linked to an architecture, and as such they are a class variable. They are not supposed to be changed by a user, and it's completely natural that saving/reloading does not save that change.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@amyeroberts would you like to add this functionality at some point? Also shouldn't `tokenizer.model_input_names` for instance work after re-instantiating from the hub?",
"I am not sure what was unclear in my comment above. This functionality cannot exist since there is no generic call method for the processor mixin."
] | 1,669
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
### System Info
Transformers, main branch
### Who can help?
@SaulLu
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Currently, processors like CLIPProcessor have a model_input_names attribute, but it has no effect on which keys end up in the BatchEncoding.
To reproduce:
```
# install transformers from my branch, see https://github.com/huggingface/transformers/pull/20295
from PIL import Image
import requests
from transformers import GITProcessor
processor = GITProcessor.from_pretrained("nielsr/git-base")
print(processor.model_input_names)
# this prints ['input_ids', 'attention_mask', 'pixel_values']
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
for key, value in inputs.items():
print(key, value.shape)
```
This prints:
```
input_ids torch.Size([2, 7])
token_type_ids torch.Size([2, 7])
attention_mask torch.Size([2, 7])
pixel_values torch.Size([1, 3, 224, 224])
```
=> as can be seen, token_type_ids are included here, which shouldn't be the case.
In addition, it seems model_input_names doesn't get reflected when pushing a tokenizer to the hub and reloading. To reproduce:
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.model_input_names)
# update model input names (let's say we don't want token type ids)
tokenizer.model_input_names = ['input_ids', 'attention_mask']
tokenizer.push_to_hub("nielsr/test")
# reload
tokenizer = AutoTokenizer.from_pretrained("nielsr/test")
print(tokenizer.model_input_names)
```
### Expected behavior
model_input_names should work appropriately for both tokenizers and processors, making sure only keys which are in this list are included in the BatchEncoding.
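As a stopgap, the output can be filtered manually against `model_input_names` (a workaround sketch, not an existing API):
```python
# continuing the GITProcessor example above: drop keys the model does not expect
filtered = {k: v for k, v in inputs.items() if k in processor.model_input_names}
print(filtered.keys())  # token_type_ids is gone
```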
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20549/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20548
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20548/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20548/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20548/events
|
https://github.com/huggingface/transformers/issues/20548
| 1,472,396,458
|
I_kwDOCUB6oc5Xwvyq
| 20,548
|
Masked Patch in ViT and ViLT
|
{
"login": "guanhdrmq",
"id": 81207745,
"node_id": "MDQ6VXNlcjgxMjA3NzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/81207745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guanhdrmq",
"html_url": "https://github.com/guanhdrmq",
"followers_url": "https://api.github.com/users/guanhdrmq/followers",
"following_url": "https://api.github.com/users/guanhdrmq/following{/other_user}",
"gists_url": "https://api.github.com/users/guanhdrmq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guanhdrmq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guanhdrmq/subscriptions",
"organizations_url": "https://api.github.com/users/guanhdrmq/orgs",
"repos_url": "https://api.github.com/users/guanhdrmq/repos",
"events_url": "https://api.github.com/users/guanhdrmq/events{/privacy}",
"received_events_url": "https://api.github.com/users/guanhdrmq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nThis comment is actually outdated as currently, self-supervised pre-training beats supervised pre-training, with models like [BEiT](https://huggingface.co/docs/transformers/model_doc/beit), [MAE](https://huggingface.co/docs/transformers/model_doc/vit_mae) as well as SimMIM. \r\n\r\nAll 3 are based on masking patches for ViT. We do provide a [ViTForMaskedImageModeling](https://huggingface.co/docs/transformers/model_doc/vit#transformers.ViTForMaskedImageModeling) class exactly for this purpose. It also comes with a pre-training script, allowing you to pre-train a model for masked image modeling yourself on custom data: https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining.\r\n\r\nWe should update that comment ;) feel free to open a PR",
"Thank you very much for your answer. Can I use ViTForMaskedImageModeling in VilT as well? \r\nappreciate for your valuable answer.",
"> Hi,\r\n> \r\n> This comment is actually outdated as currently, self-supervised pre-training beats supervised pre-training, with models like [BEiT](https://huggingface.co/docs/transformers/model_doc/beit), [MAE](https://huggingface.co/docs/transformers/model_doc/vit_mae) as well as SimMIM.\r\n> \r\n> All 3 are based on masking patches for ViT. We do provide a [ViTForMaskedImageModeling](https://huggingface.co/docs/transformers/model_doc/vit#transformers.ViTForMaskedImageModeling) class exactly for this purpose. It also comes with a pre-training script, allowing you to pre-train a model for masked image modeling yourself on custom data: https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining.\r\n> \r\n> We should update that comment ;) feel free to open a PR\r\n\r\nHi Niels Rogge,\r\n\r\nThanks for replying. Appreciate for your valuable feedback.\r\n\r\nSo another problem is:\r\n\r\nCan you add this function ViTForMaskedImageModeling in VilT as well? Not sure if it is ok.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @guanhdrmq, ViLT has its own pre-training objectives, which are different from `ViTForMaskedImageModeling`. Hence this would require a new `ViltForPreTraining` class which includes all heads used during the pre-training of ViLT."
] | 1,669
| 1,673
| 1,673
|
NONE
| null |
### System Info
Hi,
I checked the ViT docs at this link: https://huggingface.co/transformers/v4.6.0/model_doc/vit.html
They say: "The best results are obtained with supervised pre-training, which is not the case in NLP. The authors also performed an experiment with a self-supervised pre-training objective, namely masked patched prediction (inspired by masked language modeling). With this approach, the smaller ViT-B/16 model achieves 79.9% accuracy on ImageNet, a significant improvement of 2% to training from scratch, but still 4% behind supervised pre-training."
I am not sure: is there a mask function for image patches in ViT? If not, can you add this function to ViT or ViLT?
It would be greatly appreciated. Many thanks.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/lucidrains/vit-pytorch/issues/97
```python
mpp_trainer = MPP(
    transformer=model,
    patch_size=32,
    dim=1024,
    mask_prob=0.15,          # probability of using token in masked prediction task
    random_patch_prob=0.30,  # probability of randomly replacing a token being used for mpp
    replace_prob=0.50,       # probability of replacing a token being used for mpp with the mask token
)
```
### Expected behavior
I hope this masked-patch function can be added to ViT and ViLT.
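For what it's worth, a hedged sketch with the existing `ViTForMaskedImageModeling` class mentioned in the comments (checkpoint name and mask ratio are assumptions):
```python
import torch
from transformers import ViTForMaskedImageModeling

model = ViTForMaskedImageModeling.from_pretrained("google/vit-base-patch16-224-in21k")
pixel_values = torch.randn(1, 3, 224, 224)  # dummy image batch
num_patches = (model.config.image_size // model.config.patch_size) ** 2
# randomly mask roughly half of the patches
bool_masked_pos = torch.randint(0, 2, (1, num_patches)).bool()
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
print(outputs.loss)  # reconstruction loss over the masked patches
```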
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20548/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20547
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20547/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20547/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20547/events
|
https://github.com/huggingface/transformers/pull/20547
| 1,472,381,046
|
PR_kwDOCUB6oc5EHczh
| 20,547
|
Replace `set-output` by `$GITHUB_OUTPUT`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Apply the suggestion in [GitHub Actions: Deprecating save-state and set-output commands](https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/) to avoid deprecated actions.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20547/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20547",
"html_url": "https://github.com/huggingface/transformers/pull/20547",
"diff_url": "https://github.com/huggingface/transformers/pull/20547.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20547.patch",
"merged_at": 1670261114000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20546
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20546/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20546/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20546/events
|
https://github.com/huggingface/transformers/pull/20546
| 1,472,379,213
|
PR_kwDOCUB6oc5EHcaD
| 20,546
|
Install natten with CUDA version
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
PR #20511 installs `natten`, but on GPU machines we need to install it with a CUDA-supported version.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20546/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20546",
"html_url": "https://github.com/huggingface/transformers/pull/20546",
"diff_url": "https://github.com/huggingface/transformers/pull/20546.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20546.patch",
"merged_at": 1670249312000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20545
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20545/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20545/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20545/events
|
https://github.com/huggingface/transformers/issues/20545
| 1,472,362,095
|
I_kwDOCUB6oc5XwnZv
| 20,545
|
add MeMViT model
|
{
"login": "fcakyon",
"id": 34196005,
"node_id": "MDQ6VXNlcjM0MTk2MDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/34196005?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fcakyon",
"html_url": "https://github.com/fcakyon",
"followers_url": "https://api.github.com/users/fcakyon/followers",
"following_url": "https://api.github.com/users/fcakyon/following{/other_user}",
"gists_url": "https://api.github.com/users/fcakyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fcakyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fcakyon/subscriptions",
"organizations_url": "https://api.github.com/users/fcakyon/orgs",
"repos_url": "https://api.github.com/users/fcakyon/repos",
"events_url": "https://api.github.com/users/fcakyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/fcakyon/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Hi @fcakyon, MeMViT definitely seems interesting and we would be happy to see it added to transformers!\r\n\r\nIf you haven't done so, you can start by taking a look at our [existing video classification models](https://huggingface.co/models?pipeline_tag=video-classification&sort=downloads) to see if there are any re-usable components you can copy paste and use for MeMViT (preprocessing, model modules, etc.).\r\n\r\nThe best way to add a new model is to start with the `transformers-cli add-new-model` or `transformers-cli add-new-model-like` command, which initializes all the model files and ensures the new model can be properly imported. You can learn more about it over [here.](https://huggingface.co/docs/transformers/add_new_model)\r\n\r\nFeel free to ping me or @NielsRogge if you get stuck or have questions :)\r\n",
"Thank you for the response @alaradirik. Just covered up the timesformer pr: https://github.com/huggingface/transformers/pull/18908\r\n\r\nI will be starting the MeMViT implementation late this week π ",
"I am sorry that I won't be able to work on such a PR in the short future due to my time not allowing it. I have a lot of work to do for my Ph.D. If anyone else is willing to work on it, he/she is free to do π ",
"Hello, I would like to work upon adding this model",
"@fcakyon no problem at all :)\r\n\r\n@Sandstorm831 sure, please feel free to start working on it, you can ping me or @NielsRogge if you run into issues or have questions about the library in general.",
"Hi @alaradirik I would like to contribute to this model.",
"Hi @alaradirik I and @Sandstorm831 are working together towards contributing to this model.",
"hello, any status update on this? thanks! @alaradirik ",
"sorry for delayed response\r\ndue to no sustainable progress in work I and @Sandstorm831 are not working on it as of now!\r\n@shivanimall you may start working on this issue\r\nthank you"
] | 1,669
| 1,702
| null |
CONTRIBUTOR
| null |
### Model description
[MeMViT, CVPR 2022](https://arxiv.org/abs/2201.08383), released by Meta AI, is one of the most efficient transformer-based video understanding models. Its efficient online attention calculation mechanism decreases computation by 30 times compared to SOTA video classification models.
It would be an excellent addition to the `transformers` library considering it is the current SOTA on AVA, EPIC-Kitchens-100 action classification, and action anticipation datasets.
### Your contribution
I would like to work on adding this architecture to Hugging Face Transformers.
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
Source code: https://github.com/facebookresearch/MeMViT
Weight files: https://github.com/facebookresearch/MeMViT#model-checkpoints
cc: @NielsRogge @alaradirik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20545/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/20544
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20544/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20544/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20544/events
|
https://github.com/huggingface/transformers/pull/20544
| 1,472,338,406
|
PR_kwDOCUB6oc5EHTn4
| 20,544
|
ESM openfold_utils type hints
|
{
"login": "ringohoffman",
"id": 27844407,
"node_id": "MDQ6VXNlcjI3ODQ0NDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/27844407?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ringohoffman",
"html_url": "https://github.com/ringohoffman",
"followers_url": "https://api.github.com/users/ringohoffman/followers",
"following_url": "https://api.github.com/users/ringohoffman/following{/other_user}",
"gists_url": "https://api.github.com/users/ringohoffman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ringohoffman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ringohoffman/subscriptions",
"organizations_url": "https://api.github.com/users/ringohoffman/orgs",
"repos_url": "https://api.github.com/users/ringohoffman/repos",
"events_url": "https://api.github.com/users/ringohoffman/events{/privacy}",
"received_events_url": "https://api.github.com/users/ringohoffman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Wow, this is really comprehensive! All of your edits seem good, and thanks for catching those duplicate functions!\r\n\r\nThe code is failing some of our code style checks, but I believe I can fix that for you, hang on!",
"I think the other issues are just old issues with our repo - they'll be fixed if you pull from upstream on your fork's `main` branch in the GitHub UI and then rebase your branch onto that, followed by a force push",
"@ringohoffman Looks good to me now! Are you okay with me merging it?",
"> @ringohoffman Looks good to me now! Are you okay with me merging it?\r\n\r\nI'm good if you are!"
] | 1,669
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR generally adds type hints for the files located at `src/transformers/models/esm/openfold_utils/`.
0. add function/method parameter type hints where missing; add type info on collections
1. export `dict_multimap`, `flatten_final_dims`, `permute_final_dims` in `__init__.py` since these functions are currently duplicated in [src/transformers/models/esm/modeling_esmfold.py](https://github.com/huggingface/transformers/blob/2e17db8a8626baeea7efd6f2700be863f026699c/src/transformers/models/esm/modeling_esmfold.py#L218-L238); exporting these from `openfold_utils` should allow us to remove these duplicates
2. refactor `type(x) is y` to use the builtin `isinstance(x, y)`
3. refactor to avoid reassignment to the same variable with a different type (this is frowned upon by type checkers) by using multiple variables / combining expressions to avoid reassignment
4. add `assert` statements to narrow types
5. add a `FIXME` statement at an apparent bug in [`protein.py`](https://github.com/huggingface/transformers/pull/20544/files#diff-b7388405b8a9b1877a3eeb6b6941091f68e321717beec3abb7727cd3114115bfR84) in which string mutation is attempted
6. various minor refactors
<!-- Remove if not applicable -->
Related: https://github.com/huggingface/transformers/issues/16059
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@Rocketknight1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20544/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20544",
"html_url": "https://github.com/huggingface/transformers/pull/20544",
"diff_url": "https://github.com/huggingface/transformers/pull/20544.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20544.patch",
"merged_at": 1670257395000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20543
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20543/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20543/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20543/events
|
https://github.com/huggingface/transformers/issues/20543
| 1,472,232,657
|
I_kwDOCUB6oc5XwHzR
| 20,543
|
CLIPProcessor.from_pretrained is None
|
{
"login": "troilus-canva",
"id": 55678940,
"node_id": "MDQ6VXNlcjU1Njc4OTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/55678940?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/troilus-canva",
"html_url": "https://github.com/troilus-canva",
"followers_url": "https://api.github.com/users/troilus-canva/followers",
"following_url": "https://api.github.com/users/troilus-canva/following{/other_user}",
"gists_url": "https://api.github.com/users/troilus-canva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/troilus-canva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/troilus-canva/subscriptions",
"organizations_url": "https://api.github.com/users/troilus-canva/orgs",
"repos_url": "https://api.github.com/users/troilus-canva/repos",
"events_url": "https://api.github.com/users/troilus-canva/events{/privacy}",
"received_events_url": "https://api.github.com/users/troilus-canva/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Could you try:\r\n- updating Transforemrs to the latest version\r\n- make sure you have all optional dependencies necessary for CLIP (PILlow mainly)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,669
| 1,673
| 1,673
|
NONE
| null |
### System Info
transformers version: 4.20.1
### Who can help?
@patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import CLIPProcessor, CLIPModel
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
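As a quick sanity check, here is a small sketch for verifying that the vision dependency is actually available in the environment (linking the missing Pillow install to the `NoneType` error is an assumption based on the first comment above):
```python
# if this prints False, the CLIP image processor class resolves to None and
# CLIPProcessor.from_pretrained can fail; `pip install Pillow` and retry
from transformers.utils import is_vision_available

print(is_vision_available())
```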
### Expected behavior
The processor should be returned without raising `TypeError: 'NoneType' object is not callable`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20543/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20542
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20542/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20542/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20542/events
|
https://github.com/huggingface/transformers/issues/20542
| 1,472,186,261
|
I_kwDOCUB6oc5Xv8eV
| 20,542
|
cannot import name 'ReduceOp' from 'torch.distributed'
|
{
"login": "HugeBob",
"id": 37388141,
"node_id": "MDQ6VXNlcjM3Mzg4MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37388141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HugeBob",
"html_url": "https://github.com/HugeBob",
"followers_url": "https://api.github.com/users/HugeBob/followers",
"following_url": "https://api.github.com/users/HugeBob/following{/other_user}",
"gists_url": "https://api.github.com/users/HugeBob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HugeBob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HugeBob/subscriptions",
"organizations_url": "https://api.github.com/users/HugeBob/orgs",
"repos_url": "https://api.github.com/users/HugeBob/repos",
"events_url": "https://api.github.com/users/HugeBob/events{/privacy}",
"received_events_url": "https://api.github.com/users/HugeBob/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Changed from using SegformerForSemanticSegmentation to using AutoModelForSemanticSegmentation, the import now works fine but loading the pretrained model does not.\r\n\r\n```\r\nfrom transformers import AutoFeatureExtractor, AutoModelForSemanticSegmentation\r\nfeature_extractor = AutoFeatureExtractor.from_pretrained(\"nvidia/segformer-b4-finetuned-ade-512-512\")\r\nsegment_model = AutoModelForSemanticSegmentation.from_pretrained(\"segments-tobias/segformer-b0-finetuned-segments-sidewalk\")\r\n```\r\n\r\nSame error, stack trace pasted below\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/utils/import_utils.py\", line 1002, in _get_module\r\n return importlib.import_module(\".\" + module_name, self.__name__)\r\n File \"/usr/lib/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1014, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 848, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/models/segformer/modeling_segformer.py\", line 28, in <module>\r\n from ...modeling_utils import PreTrainedModel\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py\", line 79, in <module>\r\n from accelerate import dispatch_model, infer_auto_device_map, init_empty_weights\r\n File \"/usr/local/lib/python3.8/dist-packages/accelerate/__init__.py\", line 7, in <module>\r\n from .accelerator import Accelerator\r\n File \"/usr/local/lib/python3.8/dist-packages/accelerate/accelerator.py\", line 27, in <module>\r\n from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state\r\n File \"/usr/local/lib/python3.8/dist-packages/accelerate/checkpointing.py\", line 24, in <module>\r\n from .utils import (\r\n File \"/usr/local/lib/python3.8/dist-packages/accelerate/utils/__init__.py\", line 68, in <module>\r\n from .operations import (\r\n File \"/usr/local/lib/python3.8/dist-packages/accelerate/utils/operations.py\", line 25, in <module>\r\n from torch.distributed import ReduceOp\r\nImportError: cannot import name 'ReduceOp' from 'torch.distributed' (/usr/local/lib/python3.8/dist-packages/torch/distributed/__init__.py)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py\", line 445, in from_pretrained\r\n model_class = _get_model_class(config, cls._model_mapping)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py\", line 359, in _get_model_class\r\n supported_models = model_mapping[type(config)]\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py\", line 564, in __getitem__\r\n return self._load_attr_from_module(model_type, model_name)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py\", line 578, in _load_attr_from_module\r\n return getattribute_from_module(self._modules[module_name], 
attr)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py\", line 534, in getattribute_from_module\r\n if hasattr(module, attr):\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/utils/import_utils.py\", line 992, in __getattr__\r\n module = self._get_module(self._class_to_module[name])\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/utils/import_utils.py\", line 1004, in _get_module\r\n raise RuntimeError(\r\nRuntimeError: Failed to import transformers.models.segformer.modeling_segformer because of the following error (look up to see its traceback):\r\ncannot import name 'ReduceOp' from 'torch.distributed' (/usr/local/lib/python3.8/dist-packages/torch/distributed/__init__.py)\r\n```",
"Solved by upgrading the PyTorch version 1.13.0, had to build from source with USE_DISTRIBUTED=1",
"You ever think a problem is solved but then you later figure out that your problem actually isn't solved? Yea I didn't get the same error as last time because the new PyTorch build didn't have CUDA enabled, when using a proper CUDA enabled PyTorch install I do still get this error. Gonna drop the ping once more as this likely would have gotten lost (sorry for the mess) @LysandreJik ",
"Reverting to PyTorch 1.11.0 resolved this problem but gives the following warning:\r\n\r\n```\r\n/usr/local/lib/python3.8/dist-packages/transformers/models/segformer/image_processing_segformer.py:102: FutureWarning: The `reduce_labels` parameter is deprecated and will be removed in a future version. Please use `do_reduce_labels` instead.\r\n```\r\n\r\nI presume the Segformer model in Transformers simply relies on a portion of PyTorch that was deprecated starting at PyTorch version 1.12.0?",
"No this deprecation comes from the Transformers library, you should use the argument indicated.",
"But the error happens just when trying to initialize a Segformer model, would the solution be to update Transformers?",
"Error persists on Transformers 4.25.1",
"It's not an error, just a warning or are you still having the original issue?",
"I get the original issue on PyTorch 1.12.0+, it works fine on PyTorch 1.11.0",
"Any idea if the SegformerForSemanticSegmentation will be updated to support PyTorch 1.12 or has it been abandoned?",
"There is nothing wrong with segformer, the problem stems from your PyTorch install. We have tried to reproduce your issue with @muellerzr but with all versions of PyTorch from 1.11 to 1.13 there is nothing wrong with the import that fails on your setup.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I still have the same issue with :\r\npyTorch : 2.0.0+cpu\r\npython : 3.9.1\r\ntransformers : 4.25.1\r\n\r\nI found this issue by trying this clothes segmentation :\r\nhttps://huggingface.co/mattmdjaga/segformer_b2_clothes\r\n\r\nError :\r\n\r\n```\r\n/lib/python3.9/site-packages/transformers/models/segformer/image_processing_segformer.py:102: FutureWarning: The `reduce_labels` parameter is deprecated and will be removed in a future version. Please use `do_reduce_labels` instead.\r\n warnings.warn(\r\n```\r\n\r\nOkay it is a warning, but the pipe seems broken at the end."
] | 1,669
| 1,684
| 1,673
|
NONE
| null |
### System Info
Transformers version: 4.21.2
Platform: NVIDIA Jetson Xavier NX
Python version: 3.8.10
PyTorch version: '1.13.0a0+936e9305.nv22.11'
The import of SegformerForSemanticSegmentation errors out. I came back to test my code after not using it for a long while and changed nothing about the environment (didn't even turn the machine on), yet it no longer works. I tried to figure out what the problem is, finally gave up, and decided to check whether anyone else was having this issue; I didn't see anything, so I'm sure I'm just doing something stupid.
`from transformers import SegformerForSemanticSegmentation`
```
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/transformers/utils/import_utils.py", line 1002, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 848, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/usr/local/lib/python3.8/dist-packages/transformers/models/segformer/modeling_segformer.py", line 28, in <module>
from ...modeling_utils import PreTrainedModel
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 79, in <module>
from accelerate import dispatch_model, infer_auto_device_map, init_empty_weights
File "/usr/local/lib/python3.8/dist-packages/accelerate/__init__.py", line 7, in <module>
from .accelerator import Accelerator
File "/usr/local/lib/python3.8/dist-packages/accelerate/accelerator.py", line 27, in <module>
from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
File "/usr/local/lib/python3.8/dist-packages/accelerate/checkpointing.py", line 24, in <module>
from .utils import (
File "/usr/local/lib/python3.8/dist-packages/accelerate/utils/__init__.py", line 68, in <module>
from .operations import (
File "/usr/local/lib/python3.8/dist-packages/accelerate/utils/operations.py", line 25, in <module>
from torch.distributed import ReduceOp
ImportError: cannot import name 'ReduceOp' from 'torch.distributed' (/usr/local/lib/python3.8/dist-packages/torch/distributed/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
File "/usr/local/lib/python3.8/dist-packages/transformers/utils/import_utils.py", line 993, in __getattr__
value = getattr(module, name)
File "/usr/local/lib/python3.8/dist-packages/transformers/utils/import_utils.py", line 992, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/usr/local/lib/python3.8/dist-packages/transformers/utils/import_utils.py", line 1004, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.segformer.modeling_segformer because of the following error (look up to see its traceback):
cannot import name 'ReduceOp' from 'torch.distributed' (/usr/local/lib/python3.8/dist-packages/torch/distributed/__init__.py)
```
### Who can help?
@LysandreJik SegformerForSemanticSegmentation
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
`from transformers import SegformerForSemanticSegmentation`
### Expected behavior
I would expect to be able to import SegformerForSemanticSegmentation and use the class to load my already existing Segformer model. My script used to work but now errors out at the import after not touching the machine for at least a month (didn't even turn it on)
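A small diagnostic sketch that may help narrow this down (the assumption, based on the stack trace, is that `accelerate` imports `ReduceOp`, which only exists when the PyTorch build was compiled with distributed support, e.g. `USE_DISTRIBUTED=1`):
```python
import torch
import torch.distributed as dist

print(torch.__version__)
# False on PyTorch builds without distributed support; such builds trigger the ImportError above
print(dist.is_available())
```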
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20542/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20541
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20541/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20541/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20541/events
|
https://github.com/huggingface/transformers/issues/20541
| 1,472,185,934
|
I_kwDOCUB6oc5Xv8ZO
| 20,541
|
Pretraing T5 model with run_t5_mlm_flax.py script does not support distributed training with deepspeed
|
{
"login": "saimunikoti",
"id": 38624967,
"node_id": "MDQ6VXNlcjM4NjI0OTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/38624967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saimunikoti",
"html_url": "https://github.com/saimunikoti",
"followers_url": "https://api.github.com/users/saimunikoti/followers",
"following_url": "https://api.github.com/users/saimunikoti/following{/other_user}",
"gists_url": "https://api.github.com/users/saimunikoti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saimunikoti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saimunikoti/subscriptions",
"organizations_url": "https://api.github.com/users/saimunikoti/orgs",
"repos_url": "https://api.github.com/users/saimunikoti/repos",
"events_url": "https://api.github.com/users/saimunikoti/events{/privacy}",
"received_events_url": "https://api.github.com/users/saimunikoti/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@ArthurZucker could you take a look? :-) ",
"DeepSpeed only supports PyTorch and the script you mention is for Flax. I don't think there is anything that can be done ;-)",
"Thanks for letting me know. Is there anything for distributed training on Flax models "
] | 1,669
| 1,670
| 1,670
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0.dev0
- Platform: Linux-3.10.0-1127.18.2.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.8.2+cu111
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.6.1
- Jax version: 0.3.25
- JaxLib version: 0.3.25
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@patil-suraj
@patrickvonplaten
@stas
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Shell script used to pretrain the T5 model:
deepspeed --hostfile hostfile \
--master_port <fill in> \
run_t5_mlm_flax.py \
--deepspeed deepspeed_configs.json \
--train_file <fill in> \
--output_dir <fill in> \
--model_name_or_path=t5-small \
--do_train \
--max_seq_length="512" \
--num_train_epochs=1 \
--save_steps=100 \
--per_device_train_batch_size=4 \
--warmup_steps=100 \
--logging_steps=100 \
--overwrite_output_dir
```
ValueError: Some specified arguments are not used by the HfArgumentParser: ['--local_rank=0']
```
### Expected behavior
It seems run_t5_mlm_flax.py uses its own TrainingArguments class, which does not define the "local_rank" and "deepspeed" attributes (unlike transformers.TrainingArguments, which defines these variables).
run_t5_mlm_flax.py should be configured with these attributes in order to train in a distributed manner.
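For context, the Flax example scripts rely on JAX's own data parallelism (`jax.pmap`) rather than DeepSpeed. A toy sketch of that pattern is below; the loss and shapes are placeholders, not the script's actual objective:
```python
import jax
import jax.numpy as jnp

def train_step(params, batch):
    # toy loss; run_t5_mlm_flax.py computes the span-corruption loss here instead
    loss = jnp.mean((batch["inputs"] @ params) ** 2)
    # average across devices over the named "batch" axis
    return jax.lax.pmean(loss, axis_name="batch")

p_train_step = jax.pmap(train_step, axis_name="batch")

n = jax.local_device_count()
params = jax.device_put_replicated(jnp.ones((4,)), jax.local_devices())
batch = {"inputs": jnp.ones((n, 8, 4))}  # leading axis = one shard per device
print(p_train_step(params, batch))
```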
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20541/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20540
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20540/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20540/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20540/events
|
https://github.com/huggingface/transformers/pull/20540
| 1,471,918,754
|
PR_kwDOCUB6oc5EF5YF
| 20,540
|
run_speech_recognition_seq2seq.py: add `cache_dir` to load_dataset()
|
{
"login": "eschmidbauer",
"id": 7139998,
"node_id": "MDQ6VXNlcjcxMzk5OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7139998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eschmidbauer",
"html_url": "https://github.com/eschmidbauer",
"followers_url": "https://api.github.com/users/eschmidbauer/followers",
"following_url": "https://api.github.com/users/eschmidbauer/following{/other_user}",
"gists_url": "https://api.github.com/users/eschmidbauer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eschmidbauer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eschmidbauer/subscriptions",
"organizations_url": "https://api.github.com/users/eschmidbauer/orgs",
"repos_url": "https://api.github.com/users/eschmidbauer/repos",
"events_url": "https://api.github.com/users/eschmidbauer/events{/privacy}",
"received_events_url": "https://api.github.com/users/eschmidbauer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @sanchit-gandhi "
] | 1,669
| 1,670
| 1,670
|
CONTRIBUTOR
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20540/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20540",
"html_url": "https://github.com/huggingface/transformers/pull/20540",
"diff_url": "https://github.com/huggingface/transformers/pull/20540.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20540.patch",
"merged_at": 1670437397000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20539
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20539/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20539/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20539/events
|
https://github.com/huggingface/transformers/issues/20539
| 1,471,822,128
|
I_kwDOCUB6oc5Xujkw
| 20,539
|
Support token suppression, forced tokens (besides eos and bos), and decoder prompting for flax generation
|
{
"login": "andyehrenberg",
"id": 32784181,
"node_id": "MDQ6VXNlcjMyNzg0MTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/32784181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andyehrenberg",
"html_url": "https://github.com/andyehrenberg",
"followers_url": "https://api.github.com/users/andyehrenberg/followers",
"following_url": "https://api.github.com/users/andyehrenberg/following{/other_user}",
"gists_url": "https://api.github.com/users/andyehrenberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andyehrenberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andyehrenberg/subscriptions",
"organizations_url": "https://api.github.com/users/andyehrenberg/orgs",
"repos_url": "https://api.github.com/users/andyehrenberg/repos",
"events_url": "https://api.github.com/users/andyehrenberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/andyehrenberg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sanchit-gandhi @patil-suraj @gante ",
"Did we decide to implement these features in the Flax Whisper PR in the end? cc @ArthurZucker",
"@sanchit-gandhi @ArthurZucker I just added these back into the Flax Whisper PR",
"Cool! Closing this issue in favour of the PR https://github.com/huggingface/transformers/pull/20479"
] | 1,669
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
### Feature request
Add logits processors for token suppression and forced tokens at specific indices.
Enable prompting the decoder of encoder-decoder models with decoder_input_ids.
### Motivation
Currently, the Flax generation utilities do not support token suppression, forcing specific tokens to be decoded at specific indices, or prompting the decoder (helpful for models like Whisper that support decoder prompts; Flax Whisper is implemented in #20479). Adding these would move the Flax utilities closer to feature parity with the PyTorch generation utilities and would fully unlock a Flax implementation of Whisper inference.
### Your contribution
I already have these features implemented in a branch of my fork - happy to open a PR!
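For illustration only, a rough sketch of what a Flax token-suppression processor could look like (this is not the implementation in the branch, just the general shape of a Flax logits processor):
```python
import jax.numpy as jnp

class FlaxSuppressTokensLogitsProcessor:
    """Sets the scores of the given token ids to -inf so they are never generated."""

    def __init__(self, suppress_tokens):
        self.suppress_tokens = jnp.array(suppress_tokens)

    def __call__(self, input_ids, scores, cur_len):
        return scores.at[..., self.suppress_tokens].set(-float("inf"))

processor = FlaxSuppressTokensLogitsProcessor([1, 7])
scores = jnp.zeros((2, 10))
print(processor(None, scores, cur_len=0)[0])
```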
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20539/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20538
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20538/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20538/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20538/events
|
https://github.com/huggingface/transformers/pull/20538
| 1,471,778,716
|
PR_kwDOCUB6oc5EFaNl
| 20,538
|
cross platform from_pretrained
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Works great for sharded `pytorch` since a utility was already implemented. Though we are not gonna push for `Flax`, would still help to have the support already! \r\n```python \r\nfrom transformers import TFT5ForConditionalGeneration\r\nMODEL_NAME = \"google/flan-t5-xl\"\r\nm = TFT5ForConditionalGeneration.from_pretrained(MODEL_NAME, from_pt=True)\r\n```\r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"Just need to remove the `# TODOs` \r\n"
] | 1,669
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Allows loading sharded checkpoints in TF models. Should fix #19965
- [x] `from_pt=True`
- [ ] `from_flax=True`
cc @sgugger just FYI
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20538/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20538",
"html_url": "https://github.com/huggingface/transformers/pull/20538",
"diff_url": "https://github.com/huggingface/transformers/pull/20538.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20538.patch",
"merged_at": 1670255777000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20537
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20537/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20537/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20537/events
|
https://github.com/huggingface/transformers/pull/20537
| 1,471,708,099
|
PR_kwDOCUB6oc5EFK6q
| 20,537
|
Update some GH action versions
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
(I am running part of the CI to make sure nothing is broken by this PR)
We get a lot of warnings on the CI summary page:
```bash
Node.js 12 actions are deprecated. For more information see: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/. Please update the following actions to use Node.js 16: ...
```
This PR tries to update some of them. The remaining ones include the `set-output` command and one other; I will work on those in another PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20537/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20537",
"html_url": "https://github.com/huggingface/transformers/pull/20537",
"diff_url": "https://github.com/huggingface/transformers/pull/20537.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20537.patch",
"merged_at": 1670342080000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20536
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20536/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20536/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20536/events
|
https://github.com/huggingface/transformers/pull/20536
| 1,471,682,279
|
PR_kwDOCUB6oc5EFFWM
| 20,536
|
[Vision] `.to` function for ImageProcessors
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Here is a quick v1, but I am afraid it's a bit too much in the sense that I am literally testing every possible combination \r\nAlso regarding tests, we can remove them or put them as slow. I checked with `deit`, `vit` & `vilt` (for multimodal setup) and the tests are green (the failing test for LayoutLM can be easily fixed)\r\nMay I ask you to have a quick look @sgugger @ydshieh ? Thanks!",
"You're looking at something too complicated: `to()` does all that work for you already. You can pass it a string, a device or a dtype.",
"Yes I was thinking of something very complicated where someone could set `.to(device, dtype)` let's maybe keep it even simpler and force the user to put only a single argument in `.to` ?\r\n\r\nEDIT: it seems that there is a workaround for that",
"Thanks everyone for the feedback! Let me know if you think it's relevant to add the `test_cast_dtype` for all ImageProcessors as it may slow down our CI testing suite",
"Ahaha no worries! thanks for all the iterations πͺ "
] | 1,669
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
PoC for adding `.to` support on ImageProcessors
related #20453
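A rough usage sketch of what this enables, assuming the final API mirrors `BatchEncoding.to` (the exact signature and the checkpoint below are assumptions):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
inputs = image_processor(images=Image.new("RGB", (224, 224)), return_tensors="pt")

# cast / move the returned BatchFeature much like a tensor
inputs = inputs.to(torch.float16)
if torch.cuda.is_available():
    inputs = inputs.to("cuda")
print(inputs["pixel_values"].dtype)
```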
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20536/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20536",
"html_url": "https://github.com/huggingface/transformers/pull/20536",
"diff_url": "https://github.com/huggingface/transformers/pull/20536.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20536.patch",
"merged_at": 1670263854000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20535
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20535/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20535/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20535/events
|
https://github.com/huggingface/transformers/pull/20535
| 1,471,636,431
|
PR_kwDOCUB6oc5EE7SV
| 20,535
|
Add ESM contact prediction
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger All suggestions included!"
] | 1,669
| 1,669
| 1,669
|
MEMBER
| null |
This PR adds the `ContactPredictionHead` for ESM (both PT and TF). I also need to update some weights on our uploaded models to support this!
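For reference, a rough usage sketch once the head and the updated weights are in place (the checkpoint name and the exact `predict_contacts` call are assumptions):
```python
import torch
from transformers import AutoTokenizer, EsmModel

tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = EsmModel.from_pretrained("facebook/esm2_t6_8M_UR50D")

inputs = tokenizer("MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG", return_tensors="pt")
with torch.no_grad():
    contacts = model.predict_contacts(inputs["input_ids"], inputs["attention_mask"])
print(contacts.shape)  # (batch, seq_len, seq_len) contact probabilities
```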
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20535/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20535",
"html_url": "https://github.com/huggingface/transformers/pull/20535",
"diff_url": "https://github.com/huggingface/transformers/pull/20535.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20535.patch",
"merged_at": 1669989810000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20534
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20534/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20534/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20534/events
|
https://github.com/huggingface/transformers/pull/20534
| 1,471,444,748
|
PR_kwDOCUB6oc5EERj2
| 20,534
|
[ResNet] Fix doctest
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the failing doctest for `ResNetBackbone`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20534/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20534",
"html_url": "https://github.com/huggingface/transformers/pull/20534",
"diff_url": "https://github.com/huggingface/transformers/pull/20534.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20534.patch",
"merged_at": 1669915177000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20533
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20533/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20533/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20533/events
|
https://github.com/huggingface/transformers/issues/20533
| 1,471,389,644
|
I_kwDOCUB6oc5Xs5_M
| 20,533
|
Transformer XL training fails because of IndexError due to change in ModuleList for torch>1.11
|
{
"login": "krishnanNuance",
"id": 73995669,
"node_id": "MDQ6VXNlcjczOTk1NjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/73995669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krishnanNuance",
"html_url": "https://github.com/krishnanNuance",
"followers_url": "https://api.github.com/users/krishnanNuance/followers",
"following_url": "https://api.github.com/users/krishnanNuance/following{/other_user}",
"gists_url": "https://api.github.com/users/krishnanNuance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krishnanNuance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krishnanNuance/subscriptions",
"organizations_url": "https://api.github.com/users/krishnanNuance/orgs",
"repos_url": "https://api.github.com/users/krishnanNuance/repos",
"events_url": "https://api.github.com/users/krishnanNuance/events{/privacy}",
"received_events_url": "https://api.github.com/users/krishnanNuance/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for reporting but could you give us a short reproducer as our CI didn't catch any regression here?",
"> Thanks for reporting but could you give us a short reproducer as our CI didn't catch any regression here?\r\n\r\nI run it as a part of fairseq. This test case-https://github.com/facebookresearch/fairseq/blob/main/tests/test_binaries.py#L1319 also fails due to same reason. IIUC, in the fairseq case d_embed=d_model maybe this condition is required to reproduce the issue?",
"That's not exactly a small reproducer we can run on our side ;-)",
"Can you point me to the test case that tests the training of the transformer XL model in huggingface? Maybe I can tune the parameters accordingly to reproduce the issue",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"actually this is still a problem. Can you please try by setting the params d_embed and d_model iwith same value? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,669
| 1,675
| 1,675
|
NONE
| null |
### System Info
Transformers version: 4.24
Torch version: > 1.11
Stack trace:
```
venv/lib/python3.8/site-packages/transformers/models/transfo_xl/modeling_transfo_xl.py:1115: in forward
softmax_output = self.crit(pred_hid, labels)
venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1190: in _call_impl
return forward_call(*input, **kwargs)
venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1178: in _slow_forward
result = self.forward(*input, **kwargs)
venv/lib/python3.8/site-packages/transformers/models/transfo_xl/modeling_transfo_xl_utilities.py:134: in forward
head_weight, head_bias, head_proj = weights[0], biases[0], self.out_projs[0]
venv/lib/python3.8/site-packages/torch/nn/modules/container.py:282: in __getitem__
return self._modules[self._get_abs_string_index(idx)]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = ModuleList(), idx = 0
def _get_abs_string_index(self, idx):
"""Get the absolute index for the list of modules"""
idx = operator.index(idx)
if not (-len(self) <= idx < len(self)):
> raise IndexError('index {} is out of range'.format(idx))
E IndexError: index 0 is out of range
venv/lib/python3.8/site-packages/torch/nn/modules/container.py:272: IndexError
```
Please do let me know if further info is required.
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use generic torch `src_token` inputs with `d_model` equal to `d_embed` on torch > 1.11. A hypothetical minimal sketch of this condition is included below.
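The following sketch is hypothetical and not part of the original report; all config values are placeholders, and the assumption (based on the comments above) is that `d_embed == d_model`, together with `div_val=1`, leaves `out_projs` without any entries so that `out_projs[0]` raises the `IndexError` on torch > 1.11:
```python
# Hypothetical minimal reproduction sketch; config values are made up.
import torch
from transformers import TransfoXLConfig, TransfoXLLMHeadModel

config = TransfoXLConfig(
    vocab_size=100,
    cutoffs=[10, 50],
    d_model=32,
    d_embed=32,  # d_embed == d_model, the condition mentioned in the comments
    div_val=1,   # assumption: needed so that no output projection parameters are created
    n_head=2,
    d_head=16,
    d_inner=64,
    n_layer=2,
)
model = TransfoXLLMHeadModel(config)

input_ids = torch.randint(0, config.vocab_size, (1, 8))
outputs = model(input_ids, labels=input_ids)  # IndexError: index 0 is out of range on torch > 1.11
```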
### Expected behavior
Should work with different torch versions
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20533/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20532
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20532/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20532/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20532/events
|
https://github.com/huggingface/transformers/issues/20532
| 1,471,240,342
|
I_kwDOCUB6oc5XsViW
| 20,532
|
Add run_gsg.py and run_gsg_no_trainer.py pre-training scripts to examples
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"We try to avoid having examples that are too specific in the maintained examples, as we don't have the bandwidth for too many of them. How about you host it in a repo of yourself and then link to it from the model pages in our doc as well as the community page?",
"Ah I see, not problem that seems like a good alternative. Where would be the best place for asking for help with road blocks if I stumble across any?",
"You can use this issue or the [forums](https://discuss.huggingface.co/) :-)",
"Thank you @sgugger. I've had to de-prioritise this for due to funding constraints that will delay when we can train a bigger version of LongT5 from scratch so I'll close this for now and if we pick this back up I'll post any qs in the forums."
] | 1,669
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
### Feature request
Models such as Pegasus and LongT5 have been pretrained using the Gap Sentences Generation (GSG) strategy rather than the typical Masked Language Modelling (MLM).
This pre-training strategy leads to improved performance in certain language tasks such as [summarisation](https://arxiv.org/pdf/1912.08777.pdf). This request is to add run_gsg.py and run_gsg_no_trainer.py files to the examples folder that would enable pre-training using the GSG strategy instead of, or on top of, MLM.
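For illustration only, a minimal, hypothetical sketch of what GSG data preparation could look like (not the requested script): whole sentences are removed from the input document and concatenated as the target. The real Pegasus recipe selects gap sentences by importance (ROUGE against the rest of the document) rather than at random, and the sentinel token below is a placeholder.
```python
import random


def make_gsg_example(sentences, mask_token="<mask_1>", gap_ratio=0.3, seed=0):
    """Mask whole sentences in the input document and use them as the generation target."""
    rng = random.Random(seed)
    n_gaps = max(1, int(len(sentences) * gap_ratio))
    gap_ids = set(rng.sample(range(len(sentences)), n_gaps))
    model_input = " ".join(mask_token if i in gap_ids else s for i, s in enumerate(sentences))
    target = " ".join(s for i, s in enumerate(sentences) if i in gap_ids)
    return model_input, target


src, tgt = make_gsg_example([
    "Pegasus is pre-trained with gap sentences generation.",
    "Whole sentences are removed from the input document.",
    "The model learns to generate the removed sentences.",
])
print(src)
print(tgt)
```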
### Motivation
This will enable users to pre-train Pegasus or LongT5 models from scratch or to continue pre-training existing checkpoints on new datasets.
### Your contribution
I've started thinking about how to build this and am happy to contribute a PR if the HF team think this is valuable and can offer advice on the best ways to approach this.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20532/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20531
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20531/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20531/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20531/events
|
https://github.com/huggingface/transformers/pull/20531
| 1,471,154,961
|
PR_kwDOCUB6oc5EDSDR
| 20,531
|
Fix `ConditionalDetrForSegmentation` doc example
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
Need this change after PR #20160. This was done for `DetrForSegmentation`, but we missed it for `ConditionalDetrForSegmentation`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20531/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20531",
"html_url": "https://github.com/huggingface/transformers/pull/20531",
"diff_url": "https://github.com/huggingface/transformers/pull/20531.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20531.patch",
"merged_at": 1669909800000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20530
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20530/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20530/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20530/events
|
https://github.com/huggingface/transformers/pull/20530
| 1,471,100,832
|
PR_kwDOCUB6oc5EDGJE
| 20,530
|
Doc-generate
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,669
| 1,674
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
Adds documentation for the `generate` function. It supersedes #17873, opened previously.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20530/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20530",
"html_url": "https://github.com/huggingface/transformers/pull/20530",
"diff_url": "https://github.com/huggingface/transformers/pull/20530.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20530.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20529
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20529/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20529/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20529/events
|
https://github.com/huggingface/transformers/pull/20529
| 1,471,065,575
|
PR_kwDOCUB6oc5EC-mh
| 20,529
|
Change transformers.onnx to use optimum.exporters.onnx
|
{
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,670
| 1,670
|
MEMBER
| null |
# What does this PR do?
As the title says: the `transformers.onnx` command-line tool now uses the `optimum.exporters.onnx` command-line tool in the background, and redirects the user to use that tool directly in the future (same in the documentation).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20529/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/20529/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20529",
"html_url": "https://github.com/huggingface/transformers/pull/20529",
"diff_url": "https://github.com/huggingface/transformers/pull/20529.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20529.patch",
"merged_at": 1670578922000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20528
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20528/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20528/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20528/events
|
https://github.com/huggingface/transformers/pull/20528
| 1,471,052,891
|
PR_kwDOCUB6oc5EC70z
| 20,528
|
Update `ZeroShotObjectDetectionPipeline` doc example
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
As @amyeroberts mentioned in #20160, there are some tiny differences after that PR, and we need this update to pass the doctests.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20528/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20528",
"html_url": "https://github.com/huggingface/transformers/pull/20528",
"diff_url": "https://github.com/huggingface/transformers/pull/20528.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20528.patch",
"merged_at": 1669910004000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20527
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20527/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20527/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20527/events
|
https://github.com/huggingface/transformers/pull/20527
| 1,471,038,144
|
PR_kwDOCUB6oc5EC4nl
| 20,527
|
fix plbart doctest
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh @sgugger I just want to know how this PR works and why was the doctests failing earlier? Thanks in advance!",
"As the PR description mentioned, PR #19980 changed `PLBartTokenizer`, and some expected outputs in the tests have to be updated.",
"@ydshieh got it. Thanks! "
] | 1,669
| 1,670
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
We need to update the expected output in the doc example after PR #19980.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20527/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20527",
"html_url": "https://github.com/huggingface/transformers/pull/20527",
"diff_url": "https://github.com/huggingface/transformers/pull/20527.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20527.patch",
"merged_at": 1669909745000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20526
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20526/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20526/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20526/events
|
https://github.com/huggingface/transformers/issues/20526
| 1,471,023,658
|
I_kwDOCUB6oc5Xrgoq
| 20,526
|
Crash on google colab
|
{
"login": "GoldDRoge",
"id": 109260895,
"node_id": "U_kgDOBoMwXw",
"avatar_url": "https://avatars.githubusercontent.com/u/109260895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GoldDRoge",
"html_url": "https://github.com/GoldDRoge",
"followers_url": "https://api.github.com/users/GoldDRoge/followers",
"following_url": "https://api.github.com/users/GoldDRoge/following{/other_user}",
"gists_url": "https://api.github.com/users/GoldDRoge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GoldDRoge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GoldDRoge/subscriptions",
"organizations_url": "https://api.github.com/users/GoldDRoge/orgs",
"repos_url": "https://api.github.com/users/GoldDRoge/repos",
"events_url": "https://api.github.com/users/GoldDRoge/events{/privacy}",
"received_events_url": "https://api.github.com/users/GoldDRoge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sanchit-gandhi ",
"Hey @GoldDRoge! So the issue lies with the `processor.feature_extractor` call method?\r\n\r\nCould you provide a Google Colab link / reproducible code snippet I can run to get this error?\r\n\r\nLooks like you're using local audio data. For the shared Colab link / reproducible code snippet, you can use this audio sample:\r\n\r\n```python\r\n!pip install datasets\r\n\r\nfrom datasets import load_dataset\r\n\r\nlibrispeech_dummy = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\n\r\nsample = librispeech_dummy[0][\"audio\"]\r\naudio = sample[\"array\"]\r\nsampling_rate = sample[\"sampling_rate]\r\n```",
"Thanks for quickly response here the link https://colab.research.google.com/drive/1UdedI76aBEMCqlLcj1uakIdRAoztrmg5?usp=sharing \r\nok let me try. thanks for your help\r\n@sanchit-gandhi ",
"i have try like u suggest but it still crash when ever i run \r\ninput_data = processor.feature_extractor(audio[0], sampling_rate=16000)\r\nhmmm i really dont know what error is that. \r\n@sanchit-gandhi ",
"Hey @GoldDRoge! Sorry for the late reply! I was able to reproduce the error with your Google Colab. However, installing the latest version of transformers and pyctcdecode remedies the issue for me: https://colab.research.google.com/drive/1Za4340oWO5GMLlKvgEtvFO8vWVS4Fafy?usp=sharing\r\n\r\nCould you try pip installing the latest version of transformers and pyctcdecode as highlighted? Let me know if the issue still persists!\r\n\r\nThere is a 'warning' that is presented when using your Wav2Vec2ProcessorWithLM that is **not** present with the 'official' processor from the [blog post](https://huggingface.co/blog/wav2vec2-with-ngram#1-decoding-audio-data-with-wav2vec2-and-a-language-model):\r\n```\r\nWARNING:pyctcdecode.language_model:Only 0 unigrams passed as vocabulary. Is this small or artificial data?\r\n```\r\n\r\nCould you double check that your KenLM is built correctly? It's quite strange behaviour for the `unigrams.txt` file to be empty in the KenLM! This means that only sub-word tokens form your LM. https://huggingface.co/nguyenvulebinh/wav2vec2-large-vi-vlsp2020/tree/main/language_model",
"Hey @GoldDRoge! Did updating to the latest version of transformers and pyctcdecode help with the issue?We should definitely verify that our KenLM is built correctly and is returning a non-zero list of unigrams! Let me know if you're encountering any problems running the updated code snippet, more than happy to help here! π€",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,669
| 1,676
| 1,676
|
NONE
| null |
### System Info
google colab
transformers==4.20.0
https://github.com/kpu/kenlm/archive/master.zip
pyctcdecode==0.4.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers.file_utils import cached_path, hf_bucket_url
from importlib.machinery import SourceFileLoader
from transformers import Wav2Vec2ProcessorWithLM
from IPython.lib.display import Audio
import torchaudio
import torch
# Load model & processor
model_name = "nguyenvulebinh/wav2vec2-large-vi-vlsp2020"
model = SourceFileLoader("model", cached_path(hf_bucket_url(model_name,filename="model_handling.py"))).load_module().Wav2Vec2ForCTC.from_pretrained(model_name)
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_name)
# Load an example audio (16k)
audio, sample_rate = torchaudio.load(cached_path(hf_bucket_url(model_name, filename="t2_0000006682.wav")))
input_data = processor.feature_extractor(audio[0], sampling_rate=16000)
# Infer
output = model(**input_data)
# Output transcript without LM
print(processor.tokenizer.decode(output.logits.argmax(dim=-1)[0].detach().cpu().numpy()))
# Output transcript with LM
print(processor.decode(output.logits.cpu().detach().numpy()[0], beam_width=100).text)
```
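As a hypothetical narrowing-down step (following the suggestion in the comments above), the same call can be tried on a known-good public audio sample, which rules out the local file and isolates the feature extractor; the model name comes from the report, everything else is an assumption:
```python
# Hypothetical debugging sketch; upgrading transformers and pyctcdecode reportedly
# also resolves the crash, so checking versions first is worthwhile.
from datasets import load_dataset
from transformers import Wav2Vec2ProcessorWithLM

processor = Wav2Vec2ProcessorWithLM.from_pretrained("nguyenvulebinh/wav2vec2-large-vi-vlsp2020")

sample = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")[0]["audio"]
input_data = processor.feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
print(list(input_data.keys()))
```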
### Expected behavior
Whenever I run this line:
`input_data = processor.feature_extractor(audio[0], sampling_rate=16000)`
Google Colab restarts for an unknown reason. I really don't know whether it is a conflict between CPU and GPU.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20526/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20525
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20525/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20525/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20525/events
|
https://github.com/huggingface/transformers/pull/20525
| 1,471,015,452
|
PR_kwDOCUB6oc5ECzht
| 20,525
|
[BT] add links to `optimum` docs
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds links to the `BetterTransformer` documentation in the `transformers` documentation.
cc @ydshieh @michaelbenayoun @fxmarty
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20525/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20525/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20525",
"html_url": "https://github.com/huggingface/transformers/pull/20525",
"diff_url": "https://github.com/huggingface/transformers/pull/20525.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20525.patch",
"merged_at": 1669909933000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20524
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20524/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20524/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20524/events
|
https://github.com/huggingface/transformers/pull/20524
| 1,470,986,136
|
PR_kwDOCUB6oc5ECtOt
| 20,524
|
added docs to time series transformer's generate function
|
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20524). All of your documentation changes will be reflected on that endpoint."
] | 1,669
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Added docs to the time series transformer's generate function.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20524/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20524",
"html_url": "https://github.com/huggingface/transformers/pull/20524",
"diff_url": "https://github.com/huggingface/transformers/pull/20524.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20524.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20523
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20523/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20523/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20523/events
|
https://github.com/huggingface/transformers/pull/20523
| 1,470,973,621
|
PR_kwDOCUB6oc5ECqkH
| 20,523
|
Change doctests ci launch time
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
Currently the doctests CI is launched at 0h (GMT+0), but the docker images are built at 1h (GMT+0), while the modeling CI runs at 2h (GMT+0).
It has happened a few times that we changed something in the doctest docker image workflow file, expected the failing tests to pass in the next run, and it turned out they did not - as the next run is launched one hour before the new image is built.
To avoid confusion, this PR **changes the doctest launch time to be the same as the modeling CI time - which is after the docker image build CI**.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20523/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20523",
"html_url": "https://github.com/huggingface/transformers/pull/20523",
"diff_url": "https://github.com/huggingface/transformers/pull/20523.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20523.patch",
"merged_at": 1669909122000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20522
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20522/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20522/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20522/events
|
https://github.com/huggingface/transformers/pull/20522
| 1,470,963,814
|
PR_kwDOCUB6oc5ECod4
| 20,522
|
QnA example: add speed metric
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
Examples:
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20522/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20522",
"html_url": "https://github.com/huggingface/transformers/pull/20522",
"diff_url": "https://github.com/huggingface/transformers/pull/20522.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20522.patch",
"merged_at": 1669914259000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20521
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20521/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20521/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20521/events
|
https://github.com/huggingface/transformers/pull/20521
| 1,470,898,337
|
PR_kwDOCUB6oc5ECaPW
| 20,521
|
Fix OwlViTFeatureExtractor.post_process_image_guided_detection device incompatibility issue
|
{
"login": "fcakyon",
"id": 34196005,
"node_id": "MDQ6VXNlcjM0MTk2MDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/34196005?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fcakyon",
"html_url": "https://github.com/fcakyon",
"followers_url": "https://api.github.com/users/fcakyon/followers",
"following_url": "https://api.github.com/users/fcakyon/following{/other_user}",
"gists_url": "https://api.github.com/users/fcakyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fcakyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fcakyon/subscriptions",
"organizations_url": "https://api.github.com/users/fcakyon/orgs",
"repos_url": "https://api.github.com/users/fcakyon/repos",
"events_url": "https://api.github.com/users/fcakyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/fcakyon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you for the fast response!"
] | 1,669
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
fixes https://github.com/huggingface/transformers/issues/20513
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20521/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20521/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20521",
"html_url": "https://github.com/huggingface/transformers/pull/20521",
"diff_url": "https://github.com/huggingface/transformers/pull/20521.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20521.patch",
"merged_at": 1669914197000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20520
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20520/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20520/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20520/events
|
https://github.com/huggingface/transformers/pull/20520
| 1,470,888,355
|
PR_kwDOCUB6oc5ECYIl
| 20,520
|
Add RemBERT ONNX config
|
{
"login": "hchings",
"id": 14718778,
"node_id": "MDQ6VXNlcjE0NzE4Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/14718778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hchings",
"html_url": "https://github.com/hchings",
"followers_url": "https://api.github.com/users/hchings/followers",
"following_url": "https://api.github.com/users/hchings/following{/other_user}",
"gists_url": "https://api.github.com/users/hchings/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hchings/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hchings/subscriptions",
"organizations_url": "https://api.github.com/users/hchings/orgs",
"repos_url": "https://api.github.com/users/hchings/repos",
"events_url": "https://api.github.com/users/hchings/events{/privacy}",
"received_events_url": "https://api.github.com/users/hchings/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Hi @hchings, the PR looks excellent! Did you try to run tests locally?\r\n> \r\n> ```\r\n> RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k \"rembert\"\r\n> ```\r\n> \r\n> Could you also remove the `Fixes #...` before the link to the ONNX issue to avoid an auto-close from GitHub? Thanks a lot for your contribution!\r\n\r\nYes, all slow tests passed for PyTorch locally. Should we add TensorFlow tests as well? My understanding is TF tests are needed only when TF has parity with PyTorch implementations. But correct me if I'm wrong. \r\n"
] | 1,669
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Add RemBERT ONNX config (part of https://github.com/huggingface/transformers/issues/16308)
The max absolute difference between the reference model and the ONNX-exported model is around `2e-05` in testing. I learned from other PRs that this discrepancy is within an acceptable range, so I loosened the default atol.
The slow tests pass (`RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k "rembert"`).
I'm new to contributing to Transformers. If anyone can help me understand what is lacking, it would be appreciated!
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@lewtun & @ChainYo for ONNX and @Iwontbecreative for RemBERT.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20520/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20520/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20520",
"html_url": "https://github.com/huggingface/transformers/pull/20520",
"diff_url": "https://github.com/huggingface/transformers/pull/20520.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20520.patch",
"merged_at": 1670258349000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20519
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20519/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20519/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20519/events
|
https://github.com/huggingface/transformers/issues/20519
| 1,470,771,143
|
I_kwDOCUB6oc5Xqi_H
| 20,519
|
'WhisperTokenizer' object has no attribute 'set_prefix_tokens'
|
{
"login": "nethermanpro",
"id": 75082385,
"node_id": "MDQ6VXNlcjc1MDgyMzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/75082385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nethermanpro",
"html_url": "https://github.com/nethermanpro",
"followers_url": "https://api.github.com/users/nethermanpro/followers",
"following_url": "https://api.github.com/users/nethermanpro/following{/other_user}",
"gists_url": "https://api.github.com/users/nethermanpro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nethermanpro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nethermanpro/subscriptions",
"organizations_url": "https://api.github.com/users/nethermanpro/orgs",
"repos_url": "https://api.github.com/users/nethermanpro/repos",
"events_url": "https://api.github.com/users/nethermanpro/events{/privacy}",
"received_events_url": "https://api.github.com/users/nethermanpro/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @nethermanpro, I'm not a maintainer, but I think I know what is going on. \r\n\r\nIt seems that in `transformers 4.24.0`, the method `set_prefix_tokens` is not present in this version. You can find it in this repository in the main branch, if you want to use it, you will need to install transformers directly from this repository `pip install git+https://github.com/huggingface/transformers.git` or wait for the next stable release. \r\n\r\nThe documentation you are looking at seems to be https://huggingface.co/docs/transformers/main/en/model_doc/whisper which is the documentation of the main branch, to check the documentation of `4.24.0` you can \r\nselect it at the top left dropdown where it says `main`. \r\n\r\nHere you have the link https://huggingface.co/docs/transformers/v4.24.0/en/model_doc/whisper#transformers.WhisperTokenizer\r\n\r\n",
"Thanks, I think that solves my problem."
] | 1,669
| 1,669
| 1,669
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.27
- Python version: 3.9.15
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.13.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@LysandreJik
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-medium", language="spanish", cache_dir="./pretrained_models")
tokenizer.set_prefix_tokens(language="english")
```
```
AttributeError: 'WhisperTokenizer' object has no attribute 'set_prefix_tokens'
```
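A hypothetical workaround sketch for 4.24.0, where `set_prefix_tokens` does not exist yet; the assumption, based on the snippet above, is that the prefix language can be chosen at load time, so the tokenizer is reloaded instead of mutated:
```python
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-medium", language="spanish")

if hasattr(tokenizer, "set_prefix_tokens"):
    # available on newer versions of transformers (or an install from source)
    tokenizer.set_prefix_tokens(language="english")
else:
    # on 4.24.0, reload the tokenizer with the desired language instead
    tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-medium", language="english")
```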
### Expected behavior

This method should exist according to the documentation
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20519/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20518
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20518/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20518/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20518/events
|
https://github.com/huggingface/transformers/pull/20518
| 1,470,710,489
|
PR_kwDOCUB6oc5EBxpf
| 20,518
|
[WIP] Add Atlas - Retrieval Augmented Language Model
|
{
"login": "ae99",
"id": 30190922,
"node_id": "MDQ6VXNlcjMwMTkwOTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/30190922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ae99",
"html_url": "https://github.com/ae99",
"followers_url": "https://api.github.com/users/ae99/followers",
"following_url": "https://api.github.com/users/ae99/following{/other_user}",
"gists_url": "https://api.github.com/users/ae99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ae99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ae99/subscriptions",
"organizations_url": "https://api.github.com/users/ae99/orgs",
"repos_url": "https://api.github.com/users/ae99/repos",
"events_url": "https://api.github.com/users/ae99/events{/privacy}",
"received_events_url": "https://api.github.com/users/ae99/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This branch is very much a WIP currently, but for anyone interested here is roughly how I plan to structure things, aiming to roughly mesh the shape of the original implementation with Transformer's existing patterns. For the most part, I hope to make its usage as similar as possible to `T5ForConditionalGeneration`.\r\n\r\nThis is all new to me, so any feedback would be super helpful!\r\n\r\n```python\r\nclass AtlasConfig():\r\n pass\r\n\r\nclass AtlasTrainer(Trainer):\r\n pass\r\n\r\nclass AtlasPreTrainedModel(PreTrainedModel):\r\n pass\r\n\r\nclass AtlasModel(AtlasPreTrainedModel):\r\n def __init__(self, queryPassageEncoder, reader, retriever):\r\n self.queryPassageEncoder = queryPassageEncoder # UntiedDualEncoder\r\n self.reader = reader # FiD\r\n self.retriever = retriever # HFIndexBase\r\n\r\nclass FiD(T5ForConditionalGeneration):\r\n def __init__(self):\r\n self.encoder = FiDStack()\r\n self.decoder = FiDStack()\r\n\r\nclass FiDStack(T5Stack):\r\n pass\r\n\r\nclass UntiedDualEncoder(torch.nn.Module):\r\n def __init__(self, query_contriever, passage_contriever):\r\n self.query_contriever = query_contriever\r\n self.passage_contriever = passage_contriever\r\n\r\nclass Contriever(BertModel):\r\n pass\r\n\r\nclass HFIndexBase():\r\n pass\r\n\r\nclass AtlasRetriever:\r\n def __init__(self, index):\r\n self.index = index # HFIndexBase\r\n```\r\n\r\n---\r\n\r\nThe existing RAG implementation makes its sub-models easily swappable, however, the inputs and outputs expected by 'reader' model (the name given to the T5 encoder/decoder in the original implementation) here are non-standard due to the fusion-in-decoder mechanism, so I don't plan to make these models as easily swappable as I think that would complicate things unnecessarily.\r\n\r\nAs I'm not doing this, it seems it may be best practice to copy implementation (w/ \"Copied from\" comments) of models like the BertModel and T5ForConditionalGeneration rather than import - if that's the case I'll switch these across once the PR's almost ready.\r\n\r\n---\r\n\r\nThere is some complexity here in how we make the model trainable E2E within Huggingface's patterns, which I haven't yet looked into deeply. I wonder whether a `class AtlasTrainer(Trainer)` would make sense, which can implement the various continuous re-indexing strategies described in the original paper.\r\n\r\n\r\n\r\n",
"Yes, please do use the approach of copying model code and adding `# Copied from` comments as it's more inline with the general approach in the library (RAG being an exception :-) )",
"cc @ArthurZucker ",
"@ArthurZucker @ae99 let me know if you need help with anything - think this is a super cool addition! ",
"> @ArthurZucker @ae99 let me know if you need help with anything - think this is a super cool addition!\r\n\r\nHey @patrickvonplaten and @ArthurZucker! I think the general structure of this model is mostly in place. I'd love to get an early review on the PR from you just to check if things are looking ok and confirm the major things are roughly fitting patterns correctly.\r\n\r\nI have a few temporary notebooks `save_pretrained.ipynb`, `test_retriever.ipynb` and `test_model.ipynb` in place of actual tests at the moment if you would like to get a sense of usage. Like RAG I have a dedicated retriever, but I've cut this down to mostly be a small wrapper around a dataset+index for now. Documentation and tests haven't been touched at all yet, and everything is very WIP still!",
"Hi @ae99, I would also like to contribute. Let me know if there is something I can help you with.\r\n",
"@ArthurZucker could you maybe take a look here? :-) Let me know if you need some help",
"@akashe feel free to give this PR a review as well if you'd like to help a bit :-) ",
"Will review now π ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @ae99 , are you still working on the integration? If not then let me know, I would be happy to continue from where you left.",
"> Hey @ae99 , are you still working on the integration? If not then let me know, I would be happy to continue from where you left.\r\n\r\nHey @akashe, that'd be perfect.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi!\r\nThis one would be really relevant for something that we are working on in my org.\r\nWhat's the status on it? We may be able to chip in.",
"Hey! We have not really picked it up, if the community needs it we can probably come back to it, but I would advise to just put the model on the hub following this [tutorial](https://huggingface.co/docs/transformers/custom_models)! π€ "
] | 1,669
| 1,687
| 1,680
|
NONE
| null |
# What does this PR do?
Implements Atlas: Few-shot Learning with Retrieval Augmented Language Model as mentioned here
https://github.com/huggingface/transformers/issues/20503
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten, @lhoestq, @patil-suraj
cc @patrick-s-h-lewis and @gizacard
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20518/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20518/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20518",
"html_url": "https://github.com/huggingface/transformers/pull/20518",
"diff_url": "https://github.com/huggingface/transformers/pull/20518.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20518.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20517
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20517/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20517/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20517/events
|
https://github.com/huggingface/transformers/pull/20517
| 1,470,483,696
|
PR_kwDOCUB6oc5EBAfy
| 20,517
|
Fix link in pipeline device map
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
MEMBER
| null |
This PR fixes the broken link in the pipeline `device_map` parameter.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20517/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20517",
"html_url": "https://github.com/huggingface/transformers/pull/20517",
"diff_url": "https://github.com/huggingface/transformers/pull/20517.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20517.patch",
"merged_at": 1669917525000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20516
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20516/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20516/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20516/events
|
https://github.com/huggingface/transformers/pull/20516
| 1,470,439,383
|
PR_kwDOCUB6oc5EA2Mf
| 20,516
|
Fix Hubert models in TFHubertModel and TFHubertForCTC documentation code
|
{
"login": "JuanFKurucz",
"id": 31422367,
"node_id": "MDQ6VXNlcjMxNDIyMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/31422367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuanFKurucz",
"html_url": "https://github.com/JuanFKurucz",
"followers_url": "https://api.github.com/users/JuanFKurucz/followers",
"following_url": "https://api.github.com/users/JuanFKurucz/following{/other_user}",
"gists_url": "https://api.github.com/users/JuanFKurucz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuanFKurucz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuanFKurucz/subscriptions",
"organizations_url": "https://api.github.com/users/JuanFKurucz/orgs",
"repos_url": "https://api.github.com/users/JuanFKurucz/repos",
"events_url": "https://api.github.com/users/JuanFKurucz/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuanFKurucz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR updates the used models in the `TFHubertModel` and `TFHubertModelForCTC` example codes to the same model used in `HubertModel` and `HubertModelForCTC` other examples in the same documentation as `"facebook/hubert-base-960h"` does not exist and the actual code doesn't run.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20516/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20516",
"html_url": "https://github.com/huggingface/transformers/pull/20516",
"diff_url": "https://github.com/huggingface/transformers/pull/20516.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20516.patch",
"merged_at": 1669915343000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20515
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20515/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20515/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20515/events
|
https://github.com/huggingface/transformers/pull/20515
| 1,470,176,815
|
PR_kwDOCUB6oc5D_8fT
| 20,515
|
Add some warning for Dynamo and enable TF32 when it's set
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
This PR adds a warning when a user sets torchdynamo without an Ampere GPU (or higher), and also enables TF32, unless the user explicitly opted out with `--no_tf32`, to get the best performance.
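For reference, on the PyTorch side enabling TF32 amounts to flipping two backend flags; a minimal illustrative sketch (not the exact code added by this PR):

```python
import torch

# TF32 only takes effect on Ampere (or newer) GPUs.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
```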
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20515/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20515",
"html_url": "https://github.com/huggingface/transformers/pull/20515",
"diff_url": "https://github.com/huggingface/transformers/pull/20515.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20515.patch",
"merged_at": 1669840937000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20514
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20514/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20514/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20514/events
|
https://github.com/huggingface/transformers/issues/20514
| 1,470,065,134
|
I_kwDOCUB6oc5Xn2nu
| 20,514
|
Why tflite model output shape is different than the original model converted from T5ForConditionalGeneration?
|
{
"login": "generic-matrix",
"id": 15347450,
"node_id": "MDQ6VXNlcjE1MzQ3NDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/15347450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/generic-matrix",
"html_url": "https://github.com/generic-matrix",
"followers_url": "https://api.github.com/users/generic-matrix/followers",
"following_url": "https://api.github.com/users/generic-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/generic-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/generic-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/generic-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/generic-matrix/orgs",
"repos_url": "https://api.github.com/users/generic-matrix/repos",
"events_url": "https://api.github.com/users/generic-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/generic-matrix/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You reloaded your model in a `TFT5Model`, which is not the same as `T5ForConditionalGeneration`: it's the base model without the decoder.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hello @sgugger ,\r\n\r\nThank you for the update.\r\n\r\nIs there any way to convert T5ForConditionalGeneration to TFlite model taking the docs below into consideration ?\r\n\r\nhttps://www.tensorflow.org/api_docs/python/tf/lite/TFLiteConverter",
"cc @gante @Rocketknight1 the TF experts might be able to help here!",
"Hey @generic-matrix π you probably want to export the entire generation function (which wraps the model), not just the model itself. Look at this [test example](https://github.com/huggingface/transformers/blob/92ce53aab859012f7714dae6d6fce7a7d701e75f/tests/generation/test_tf_utils.py#L140) :)",
"@generic-matrix please see the sample [notebook](https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/generate_tflite_from_whisper.ipynb) converting from TFWhisperForConditionalGeneration to tflite \r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,669
| 1,678
| 1,678
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@patric @anton-l @sanchit-gandhi @Rocketknight1
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
**T5ForConditionalGeneration Model to translate English to German**
```
from transformers import T5TokenizerFast, T5ForConditionalGeneration
tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer("translate English to German: the flowers are wonderful.", return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Output : Die Blumen sind wunderbar.
**Input Shape**
```
input_ids.shape
```
Output : torch.Size([1, 11])
**Output Shape**
```
outputs.shape
```
Output : torch.Size([1, 7])
**Save Pretrained model**
```
!mkdir /content/test
model.save_pretrained('/content/test')
```
**Load TFT5Model model from pretrained**
```
from transformers import TFT5Model
t5model = TFT5Model.from_pretrained('/content/test',from_pt=True)
!mkdir /content/test/t5
t5model.save('/content/test/t5')
```
**Convert TFT5Model to TFlite**
```
import tensorflow as tf
saved_model_dir = '/content/test/t5'
!mkdir /content/test/tflite
tflite_model_path = '/content/test/tflite/model.tflite'
# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.experimental_new_converter = True
converter.experimental_new_quantizer = True
converter.experimental_new_dynamic_range_quantizer = True
converter.allow_custom_ops=True
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
#print(tflite_model)
print(type(tflite_model))
# Save the model
with open(tflite_model_path, 'wb') as f:
f.write(tflite_model)
```
**Load The TFLite model**
```
import numpy as np
import tensorflow as tf
tflite_model_path = '/content/test/tflite/model.tflite'
# Load the TFLite model and allocate tensors
interpreter = tf.lite.Interpreter(model_path=tflite_model_path)
interpreter.resize_tensor_input(0, [1,5], strict=True)
interpreter.resize_tensor_input(1, [1,5], strict=True)
interpreter.resize_tensor_input(2, [1,5], strict=True)
interpreter.resize_tensor_input(3, [1,5], strict=True)
interpreter.allocate_tensors()
# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
#print the output
input_data = np.array(np.random.random_sample((input_shape)), dtype=np.int64)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
```
**Get The Output Shape**
```
print(output_data.shape)
```
### Expected behavior
`print(output_data.shape)`
results in
**Output : (1, 8, 5, 64)
Expected something like : (1, 7)**
Can someone let me know where I am going wrong?
The output shape of the TFLite model is completely different from that of the `T5ForConditionalGeneration` model.
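One likely direction, sketched here under the assumption that a recent `transformers`/TF version is used where TF `generate` can be traced inside `tf.function`, is to export the whole `generate` call rather than the bare model, so the exported graph returns token ids of shape `(batch_size, sequence_length)`:

```python
import tensorflow as tf
from transformers import TFT5ForConditionalGeneration

model = TFT5ForConditionalGeneration.from_pretrained("t5-small")


class GenerateModule(tf.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    @tf.function(input_signature=[tf.TensorSpec([1, None], tf.int32, name="input_ids")])
    def serving(self, input_ids):
        # Trace the whole generation loop so the exported graph returns token ids.
        sequences = self.model.generate(input_ids, max_new_tokens=32)
        return {"sequences": sequences}


module = GenerateModule(model)
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [module.serving.get_concrete_function()], module
)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_generate_model = converter.convert()
```

Here the batch size of 1, `max_new_tokens=32`, and the module/function names are illustrative choices, not values from the original report.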
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20514/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20513
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20513/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20513/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20513/events
|
https://github.com/huggingface/transformers/issues/20513
| 1,470,022,809
|
I_kwDOCUB6oc5XnsSZ
| 20,513
|
owlvit image guided detection does not work in gpu (cuda)
|
{
"login": "fcakyon",
"id": 34196005,
"node_id": "MDQ6VXNlcjM0MTk2MDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/34196005?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fcakyon",
"html_url": "https://github.com/fcakyon",
"followers_url": "https://api.github.com/users/fcakyon/followers",
"following_url": "https://api.github.com/users/fcakyon/following{/other_user}",
"gists_url": "https://api.github.com/users/fcakyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fcakyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fcakyon/subscriptions",
"organizations_url": "https://api.github.com/users/fcakyon/orgs",
"repos_url": "https://api.github.com/users/fcakyon/repos",
"events_url": "https://api.github.com/users/fcakyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/fcakyon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @fcakyon, thanks for bringing this up! You can expect a fix PR shortly.\r\n\r\nAs a side note, we open new issues and PRs for bugs to make it easier to track improvements. You can directly include your fix suggestions in the issue.",
"I will try to open a PR, give me few mins π ",
"@alaradirik tried to open the related pr here: https://github.com/huggingface/transformers/pull/20521"
] | 1,669
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.25.0.dev0
- Platform: Linux-5.4.0-132-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@alaradirik @NielsRogge
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run this demo in gpu: https://huggingface.co/spaces/adirik/image-guided-owlvit
Get this error:
```bash
File ".../lib/python3.8/site-packages/transformers/models/owlvit/image_processing_owlvit.py", line 420, in post_process_image_guided_detection
target_boxes = target_boxes * scale_fct[:, None, :]
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
### Expected behavior
I have posted a possible fix in this comment: https://github.com/huggingface/transformers/pull/20160#discussion_r1036281555
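The gist of that suggestion, as a minimal sketch with illustrative names (not the exact library code), is to move the scale factor onto the same device as the predicted boxes before multiplying:

```python
import torch


def rescale_boxes(target_boxes: torch.Tensor, target_sizes: torch.Tensor) -> torch.Tensor:
    # target_sizes has shape (batch, 2) holding (height, width) per image.
    img_h, img_w = target_sizes.unbind(1)
    scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1)
    # The crucial line: both operands must live on the same device (e.g. cuda:0).
    scale_fct = scale_fct.to(target_boxes.device)
    return target_boxes * scale_fct[:, None, :]
```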
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20513/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20512
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20512/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20512/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20512/events
|
https://github.com/huggingface/transformers/pull/20512
| 1,469,958,913
|
PR_kwDOCUB6oc5D_NqT
| 20,512
|
Update expected output in `AutomaticSpeechRecognitionPipeline` doc example
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
The failed doc example uses `openai/whisper-base`. It is probably the same reason as in #20493, so I just updated the expected output.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20512/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20512/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20512",
"html_url": "https://github.com/huggingface/transformers/pull/20512",
"diff_url": "https://github.com/huggingface/transformers/pull/20512.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20512.patch",
"merged_at": 1669834099000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20511
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20511/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20511/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20511/events
|
https://github.com/huggingface/transformers/pull/20511
| 1,469,924,448
|
PR_kwDOCUB6oc5D_GP0
| 20,511
|
Add `natten` in docker file
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
So that we can run the tests for the `dinat` model.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20511/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20511",
"html_url": "https://github.com/huggingface/transformers/pull/20511",
"diff_url": "https://github.com/huggingface/transformers/pull/20511.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20511.patch",
"merged_at": 1669834174000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20510
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20510/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20510/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20510/events
|
https://github.com/huggingface/transformers/pull/20510
| 1,469,894,601
|
PR_kwDOCUB6oc5D-_wW
| 20,510
|
Fix Data2VecTextForCasualLM example code documentation
|
{
"login": "JuanFKurucz",
"id": 31422367,
"node_id": "MDQ6VXNlcjMxNDIyMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/31422367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuanFKurucz",
"html_url": "https://github.com/JuanFKurucz",
"followers_url": "https://api.github.com/users/JuanFKurucz/followers",
"following_url": "https://api.github.com/users/JuanFKurucz/following{/other_user}",
"gists_url": "https://api.github.com/users/JuanFKurucz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuanFKurucz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuanFKurucz/subscriptions",
"organizations_url": "https://api.github.com/users/JuanFKurucz/orgs",
"repos_url": "https://api.github.com/users/JuanFKurucz/repos",
"events_url": "https://api.github.com/users/JuanFKurucz/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuanFKurucz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the documentation of the `Data2VecTextForCausalLM` example code: it currently imports `Data2VecTextTokenizer`, which does not exist; the correct tokenizer is `RobertaTokenizer`. In addition, the model name `"data2vec-base"` does not exist (and the example never says to create one locally); with this change the example points to `"facebook/data2vec-text-base"`.
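A sketch of what the corrected example is expected to look like (checkpoint name taken from the description above):

```python
from transformers import Data2VecTextForCausalLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("facebook/data2vec-text-base")
model = Data2VecTextForCausalLM.from_pretrained("facebook/data2vec-text-base")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
```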
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20510/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20510",
"html_url": "https://github.com/huggingface/transformers/pull/20510",
"diff_url": "https://github.com/huggingface/transformers/pull/20510.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20510.patch",
"merged_at": 1669838627000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20509
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20509/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20509/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20509/events
|
https://github.com/huggingface/transformers/pull/20509
| 1,469,819,729
|
PR_kwDOCUB6oc5D-vlE
| 20,509
|
Fix Typo in Docs for GPU
|
{
"login": "julianpollmann",
"id": 2836863,
"node_id": "MDQ6VXNlcjI4MzY4NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2836863?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julianpollmann",
"html_url": "https://github.com/julianpollmann",
"followers_url": "https://api.github.com/users/julianpollmann/followers",
"following_url": "https://api.github.com/users/julianpollmann/following{/other_user}",
"gists_url": "https://api.github.com/users/julianpollmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julianpollmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julianpollmann/subscriptions",
"organizations_url": "https://api.github.com/users/julianpollmann/orgs",
"repos_url": "https://api.github.com/users/julianpollmann/repos",
"events_url": "https://api.github.com/users/julianpollmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/julianpollmann/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a typo in the docs for multi-GPU training (https://huggingface.co/docs/transformers/main/en/perf_train_gpu_many).
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20509/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20509",
"html_url": "https://github.com/huggingface/transformers/pull/20509",
"diff_url": "https://github.com/huggingface/transformers/pull/20509.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20509.patch",
"merged_at": 1669822879000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20508
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20508/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20508/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20508/events
|
https://github.com/huggingface/transformers/issues/20508
| 1,469,760,668
|
I_kwDOCUB6oc5XmsSc
| 20,508
|
more_itertools required for Whisper normaliser
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Totally agree here! If we only use window and it is pretty short, makes sense to implement it! But IIRC it was a pretty long dependency. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"cc @Narsil do you want to take this over? ",
"Done I think.",
"Thanks @Narsil!"
] | 1,669
| 1,676
| 1,673
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.25.0.dev0
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.8.9
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.5.1 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
cc @ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The English normaliser for Whisper relies on the `more_itertools` package.
It is imported here:
https://github.com/huggingface/transformers/blob/761b3fad922310457003af2fea6c447768676c8d/src/transformers/models/whisper/english_normalizer.py#L23-L24
And used here:
https://github.com/huggingface/transformers/blob/761b3fad922310457003af2fea6c447768676c8d/src/transformers/models/whisper/english_normalizer.py#L243
Since we import `more_itertools` under the if statement `if is_more_itertools_available()`, the normaliser **fails** if `more_itertools` is **not** installed.
```python
from transformers import WhisperTokenizer
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny.en")
tokenizer._normalize("the cat")
```
```
File "/home/user/.local/lib/python3.8/site-packages/transformers/models/whisper/tokenization_whisper.py", line 485, in _normalize
return normalizer(text)
File "/home/user/.local/lib/python3.8/site-packages/transformers/models/whisper/english_normalizer.py", line 593, in __call__
s = self.standardize_numbers(s)
File "/home/user/.local/lib/python3.8/site-packages/transformers/models/whisper/english_normalizer.py", line 497, in __call__
s = " ".join(word for word in self.process_words(s.split()) if word is not None)
File "/home/user/.local/lib/python3.8/site-packages/transformers/models/whisper/english_normalizer.py", line 497, in
s = " ".join(word for word in self.process_words(s.split()) if word is not None)
File "/home/user/.local/lib/python3.8/site-packages/transformers/models/whisper/english_normalizer.py", line 243, in process_words
for prev, current, next in windowed([None] + words + [None], 3):
NameError: name 'windowed' is not defined
```
IMO this is a pretty cryptic error message for the user. Perhaps we can add a warning that `more_itertools` is required for the normaliser? Even better, we could implement the `windowed` function ourselves and avoid an extra library dependency that is only used for one function.
### Expected behavior
Good: warning that `more_itertools` is not installed
Better: implement the `windowed` function ourselves
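A minimal sketch of such a stand-in, covering only the fixed-size, step-1 window used above (intended to match `more_itertools.windowed` for that case):

```python
def windowed(iterable, n, fillvalue=None):
    """Yield successive n-sized windows over `iterable`, padding short inputs."""
    items = list(iterable)
    if len(items) < n:
        # Shorter than one window: yield a single padded tuple, like more_itertools.
        yield tuple(items) + (fillvalue,) * (n - len(items))
        return
    for i in range(len(items) - n + 1):
        yield tuple(items[i : i + n])
```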
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20508/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20507
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20507/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20507/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20507/events
|
https://github.com/huggingface/transformers/pull/20507
| 1,469,668,726
|
PR_kwDOCUB6oc5D-Oy6
| 20,507
|
Fix TF nightly tests
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Don't worry, it was a really small fix! Just making sure you saw this so you didn't get confused about why your code was being changed.",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
MEMBER
| null |
This PR fixes two issues with the TF tests:
1) `test_saved_model_creation` failed sometimes because the dict being passed to the saved model didn't match the inputs it was traced/compiled with. This should be fixed now.
2) Some of the tests for the new `TFGPT2Tokenizer` (cc @piEsposito) were using `is_tensorflow_text_available` or `requires_tensorflow_text`, but `TFGPT2Tokenizer` actually depends on `keras-nlp`. I made sure the requirements were changed and that `is_keras_nlp_available` is importable from the root.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20507/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20507/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20507",
"html_url": "https://github.com/huggingface/transformers/pull/20507",
"diff_url": "https://github.com/huggingface/transformers/pull/20507.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20507.patch",
"merged_at": 1669819674000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20506
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20506/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20506/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20506/events
|
https://github.com/huggingface/transformers/pull/20506
| 1,469,436,328
|
PR_kwDOCUB6oc5D9cRr
| 20,506
|
[modelcard] Update dataset tags
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Here's an example @lewtun \r\n\r\nBefore: https://huggingface.co/sanchit-gandhi/whisper-debug/blob/88128010f73114cc2274868938ccbf6c373b15c5/README.md#L11-L20\r\n\r\nAfter: https://huggingface.co/sanchit-gandhi/whisper-debug/blob/3be9573cff0eb5af8877189481fd13d411171a86/README.md#L9-L20 (only used 8 samples for eval)",
"Will merge if you're happy with the changes @lewtun?",
"(merging to unblock testing for the Whisper fine-tuning event)"
] | 1,669
| 1,687
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
Currently, the `model-index` portion of the model cards generated by Trainer reference the train dataset and omit the dataset split and config. This PR:
1. Uses the **eval dataset** to build the yaml data for the model card rather than the **train dataset** by default. Why? Because the yaml data is built on a trio of information of {task, dataset, metrics} (_c.f._ [modelcard.py#L446](https://github.com/huggingface/transformers/blob/d0c1ded5f36e27cd74728c0127add5afdf1f2afa/src/transformers/modelcard.py#L446)). Here, metrics is referring to the **eval dataset** metrics, so we should build the metadata information with the eval dataset name, config, split, etc. If the eval_dataset is None, we revert to the train_dataset.
2. Checks if `dataset_metadata` is None. If so, builds from the `one_dataset`.
The combined changes of 1 and 2 mean that model cards generated by Trainer will be compatible with the autoevaluate leaderboards! https://huggingface.co/spaces/autoevaluate/leaderboards
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20506/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20506/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20506",
"html_url": "https://github.com/huggingface/transformers/pull/20506",
"diff_url": "https://github.com/huggingface/transformers/pull/20506.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20506.patch",
"merged_at": 1669891938000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20505
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20505/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20505/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20505/events
|
https://github.com/huggingface/transformers/issues/20505
| 1,469,170,208
|
I_kwDOCUB6oc5XkcIg
| 20,505
|
layerdrop in Wav2Vec2Adapter
|
{
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sanchit-gandhi could you take a look here? ",
"Hey @bofenghuang! Thanks for opening this issue. \r\n\r\nFor context, we use the adapter layer when combining the Wav2Vec2 model in a sequence-to-sequence combination (see https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#warm-started-speech-encoder-decoder-model). Here, the purpose of the adapter layer is to better match the time scale of the encoder with that of the decoder (see aforementioned doc).\r\n\r\nIn this respect, it's fine if the CNN downsamples the Wav2Vec2 output sequence at a stochastic rate for all training samples. This should add some 'robustness' to our text decoder which has to infer the correct target transcription from Wav2Vec2 output sequences of slightly varying length.\r\n\r\nYou can also disable layer drop by setting `layerdrop=0.0` in the config: https://huggingface.co/facebook/wav2vec2-base-960h/blob/main/config.json#L59",
"Thanks @sanchit-gandhi !"
] | 1,669
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
### System Info
Hi!
As mentioned in https://github.com/huggingface/transformers/issues/20451, the layer dropout in `Wav2Vec2Adapter` may produce outputs with different lengths.
I understand the use of layerdrop in the transformer layers, but do we need it in the CNN layers (`Wav2Vec2Adapter`)?
https://github.com/huggingface/transformers/blob/61d3928bfb3029bceb5be3e68ca3d4bf8456758f/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1006-L1009
### Who can help?
cc @anton-l @patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I copied the reproduction code of @OllieBroadhurst in https://github.com/huggingface/transformers/issues/20451
```python
import torch

from transformers import Wav2Vec2Model
model = Wav2Vec2Model.from_pretrained("anton-l/wav2vec2-base-lang-id",
add_adapter=True,
adapter_stride=2,
adapter_kernel_size=3,
num_adapter_layers=2)
model.train() # NB
dummy_input = torch.randn((1, 16000))
expected_output_sequence_length = 13
for _ in range(200):
output_shape = model(input_values=dummy_input)[0].shape[1]
if output_shape != expected_output_sequence_length:
print(output_shape)
```
### Expected behavior
The above loop shouldn't print anything out.
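As a sketch of a workaround rather than a fix, setting `layerdrop=0.0` disables the random layer skipping entirely and keeps the output length deterministic:

```python
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained(
    "anton-l/wav2vec2-base-lang-id",
    add_adapter=True,
    adapter_stride=2,
    adapter_kernel_size=3,
    num_adapter_layers=2,
    layerdrop=0.0,  # no transformer/adapter layers are randomly dropped during training
)
```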
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20505/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20504
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20504/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20504/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20504/events
|
https://github.com/huggingface/transformers/pull/20504
| 1,469,117,958
|
PR_kwDOCUB6oc5D8YO1
| 20,504
|
fix ipex+fp32 jit trace model inference error in ipex 1.13
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@jianan-gu ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
The error shows up like: "Currently the auto_kernel_selection does not support the grad mode! Please add torch.no_grad() before the inference runtime." Since jit mode only works in inference mode, it is safe to add such logic.
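A minimal, generic sketch of the pattern being guarded (a toy traced module, not the Trainer code itself):

```python
import torch


class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x)


traced = torch.jit.trace(TinyModel(), torch.randn(1, 4))

# Run jit-traced inference under no_grad, as the ipex error message asks for.
with torch.no_grad():
    out = traced(torch.randn(1, 4))
```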
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- trainer: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20504/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20504",
"html_url": "https://github.com/huggingface/transformers/pull/20504",
"diff_url": "https://github.com/huggingface/transformers/pull/20504.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20504.patch",
"merged_at": 1669816682000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20503
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20503/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20503/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20503/events
|
https://github.com/huggingface/transformers/issues/20503
| 1,468,972,559
|
I_kwDOCUB6oc5Xjr4P
| 20,503
|
Atlas: Few-shot Learning with Retrieval Augmented Language Model
|
{
"login": "ae99",
"id": 30190922,
"node_id": "MDQ6VXNlcjMwMTkwOTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/30190922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ae99",
"html_url": "https://github.com/ae99",
"followers_url": "https://api.github.com/users/ae99/followers",
"following_url": "https://api.github.com/users/ae99/following{/other_user}",
"gists_url": "https://api.github.com/users/ae99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ae99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ae99/subscriptions",
"organizations_url": "https://api.github.com/users/ae99/orgs",
"repos_url": "https://api.github.com/users/ae99/repos",
"events_url": "https://api.github.com/users/ae99/events{/privacy}",
"received_events_url": "https://api.github.com/users/ae99/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Hi all!\r\n\r\nSuper appreciative of the authors for open-sourcing this model, really exciting stuff.\r\n\r\nI'm planning on having a go at implementing this model here. Aware there are others who have been looking at similar models in the past (https://github.com/huggingface/transformers/issues/15387), so thought it good to get this ticket in early in case you are also interested in working on this!",
"go for it! it shouldnt be too hard to get inference working - training may be more involved - the way we do the distributed index might be a little painful to integrate gracefully.\r\ngood luck!\r\n\r\nPlease make sure that you provide links to the original repo prominently, and try to make sure the models are 1) capable of achieving the same accuracy that they do in our repo 2) mathematically preform the same computations. \r\n\r\n",
"Hello, is ATLAS a part of huggingface now?"
] | 1,669
| 1,680
| null |
NONE
| null |
### Model description
Atlas is a retrieval-augmented seq2seq language model comprised of a Contriever retriever and fusion-in-decoder (FID) architecture (which uses T5), introduced in the paper [Atlas: Few-shot Learning with Retrieval Augmented Language Models](https://arxiv.org/pdf/2208.03299.pdf)
From the papers abstract:
> Large language models have shown impressive few-shot results on a wide range of tasks.
However, when knowledge is key for such results, as is the case for tasks such as question
answering and fact checking, massive parameter counts to store knowledge seem to be needed.
Retrieval augmented models are known to excel at knowledge intensive tasks without the
need for as many parameters, but it is unclear whether they work in few-shot settings. In this
work we present Atlas, a carefully designed and pre-trained retrieval augmented language
model able to learn knowledge intensive tasks with very few training examples. We perform
evaluations on a wide range of tasks, including MMLU, KILT and NaturalQuestions, and
study the impact of the content of the document index, showing that it can easily be updated.
Notably, Atlas reaches over 42% accuracy on Natural Questions using only 64 examples,
outperforming a 540B parameters model by 3% despite having 50x fewer parameters.
### Open source status
- [X] The model implementation is available https://github.com/facebookresearch/atlas
- [X] The model weights are available https://github.com/facebookresearch/atlas
### Provide useful links for the implementation
Open-sourced implementation from Meta https://github.com/facebookresearch/atlas, with weights available.
Authored by @patrick-s-h-lewis and @gizacard
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20503/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20503/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/20502
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20502/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20502/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20502/events
|
https://github.com/huggingface/transformers/issues/20502
| 1,468,879,211
|
I_kwDOCUB6oc5XjVFr
| 20,502
|
HTTPS request to model repo despite local_files_only=T
|
{
"login": "asheetal",
"id": 25741779,
"node_id": "MDQ6VXNlcjI1NzQxNzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25741779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asheetal",
"html_url": "https://github.com/asheetal",
"followers_url": "https://api.github.com/users/asheetal/followers",
"following_url": "https://api.github.com/users/asheetal/following{/other_user}",
"gists_url": "https://api.github.com/users/asheetal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asheetal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asheetal/subscriptions",
"organizations_url": "https://api.github.com/users/asheetal/orgs",
"repos_url": "https://api.github.com/users/asheetal/repos",
"events_url": "https://api.github.com/users/asheetal/events{/privacy}",
"received_events_url": "https://api.github.com/users/asheetal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You need to upgrade your version of Transformers as it's pretty old and many bugs with the cache system have been fixed since then.",
"Thank you. Upgrade fixes the problem"
] | 1,669
| 1,670
| 1,670
|
NONE
| null |
### System Info
- `transformers` version: 4.9.2
- Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.10.0a0+git36449ea (True)
- Tensorflow version (GPU?): 2.4.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: none
### Who can help?
@mrm8488
I am trying to do Q&A on a very large text, so I chunk the text and do Q&A on smaller chunks.
I have a pipeline using this model embedded in a function that I call in a loop.
```
my_guru <- function(my_sents, my_topk, my_question) {
sequence_length <- lengths(gregexpr("\\W+", my_sents)) + 1
print(sequence_length)
assert("Sequence length < 4096", sequence_length < 4096)
text <- reticulate::import("tensorflow_text")
transformers <- reticulate::import("transformers")
torch <- reticulate::import("torch")
model <- transformers$AutoModelForQuestionAnswering$from_pretrained("mrm8488/longformer-base-4096-finetuned-squadv2",
#low_cpu_mem_usage=FALSE,
local_files_only=T)
tokenizer <- transformers$AutoTokenizer$from_pretrained("mrm8488/longformer-base-4096-finetuned-squadv2",
truncation = FALSE,
padding='max_length',
local_files_only=T)
guru <- transformers$pipeline("question-answering",
model=model,
tokenizer=tokenizer,
device=0L)
answers <- guru(context = my_sents, question = my_question, top_k = my_topk)
rm(tokenizer)
gc()
torch$cuda$empty_cache()
return(answers)
}
```
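For reference, a minimal Python sketch of the same flow that loads the checkpoint once and reuses the pipeline across chunks (everything beyond the checkpoint name is illustrative):
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_id = "mrm8488/longformer-base-4096-finetuned-squadv2"
model = AutoModelForQuestionAnswering.from_pretrained(model_id, local_files_only=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, local_files_only=True)
guru = pipeline("question-answering", model=model, tokenizer=tokenizer, device=0)

def answer(context: str, question: str, top_k: int = 1):
    # Reusing the pipeline avoids re-loading the checkpoint on every chunk.
    return guru(context=context, question=question, top_k=top_k)
```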
Since I am using local_files_only=T, I expect it to run for days and complete without going to the internet. However, after looping a few thousand times, it generates an error and crashes:
```
/usr/local/lib/python3.8/dist-packages/transformers/pipelines/question_answering.py:316: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:198.)
fw_args = {k: torch.tensor(v, device=self.device) for (k, v) in fw_args.items()}
Error in py_call_impl(callable, dots$args, dots$keywords) :
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/models/mrm8488/longformer-base-4096-finetuned-squadv2
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
set local files to TRUE
repeat the pipeline over thousands of times
### Expected behavior
The script should not send a request to huggingface when local files is set to TRUE
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20502/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20501
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20501/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20501/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20501/events
|
https://github.com/huggingface/transformers/pull/20501
| 1,468,849,214
|
PR_kwDOCUB6oc5D7fim
| 20,501
|
Update doc examples feature extractor -> image processor
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
Replaces vision feature extractor references with image processors throughout transformers documentation.
Places where changes didn't happen:
* Vision models which use a `Processor` class. Some processor classes still have the `feature_extractor_class` property, to be removed in the future.
* `examples/...` - required changes to code outside the scope of this PR and dependent on some changes to the `Processor` class
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20501/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20501",
"html_url": "https://github.com/huggingface/transformers/pull/20501",
"diff_url": "https://github.com/huggingface/transformers/pull/20501.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20501.patch",
"merged_at": 1669819856000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20500
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20500/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20500/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20500/events
|
https://github.com/huggingface/transformers/issues/20500
| 1,468,844,757
|
I_kwDOCUB6oc5XjMrV
| 20,500
|
Unstable generation results when using Top-p decoding
|
{
"login": "bilalghanem",
"id": 47889448,
"node_id": "MDQ6VXNlcjQ3ODg5NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/47889448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bilalghanem",
"html_url": "https://github.com/bilalghanem",
"followers_url": "https://api.github.com/users/bilalghanem/followers",
"following_url": "https://api.github.com/users/bilalghanem/following{/other_user}",
"gists_url": "https://api.github.com/users/bilalghanem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bilalghanem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilalghanem/subscriptions",
"organizations_url": "https://api.github.com/users/bilalghanem/orgs",
"repos_url": "https://api.github.com/users/bilalghanem/repos",
"events_url": "https://api.github.com/users/bilalghanem/events{/privacy}",
"received_events_url": "https://api.github.com/users/bilalghanem/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Gently pinging @gante here",
"Hi @bilalghanem π \r\n\r\nTop-p has randomness -- when using `top_p=0.9`, `generate()` picks a token among the top candidate tokens, where the sum of the probability of those top candidates is >= 0.9. In other words, unless you model predicts a token with probability > 0.9 at each generation step, it will not be deterministic.\r\n\r\nI'd recommend reading this blog post: https://github.com/huggingface/blog/blob/main/how-to-generate.md",
"> \r\n\r\nThanks @gante.\r\nCan you clarify, how it won't be deterministic if we don't find a token that satisfies the P condition? ",
"@bilalghanem It considers all tokens, from most to least likely, such that its summed probability is `top_p`, and samples (proportionally) from those tokens.\r\n\r\nConsider the following logits array: `[0.5, 0.4, 0.1]`. If you run sampling with `top_p=0.9`, it will pick the first token `(0.5/0.9)*100 = 55.6%` of the times, the second token `(0.4/0.9)*100 = 44.4%` of the times, and the last token `0%` of the times.\r\n\r\nIf you want a deterministic behavior, use `do_sample=False`. Again, I'd recommend reading the following blog post, which explains all of this: https://github.com/huggingface/blog/blob/main/how-to-generate.md\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,669
| 1,676
| 1,676
|
NONE
| null |
### System Info
I trained a T5-base model for a translation task. I use the top-p decoding strategy to generate text.
Something is weird with the model: every time I ask it to generate text for the same input, it generates different text. When I fix the random seed, the model generates the exact same text every time.
My question is: why does the model generate different text for the same input if the random seed is not fixed? I assumed the top-p decoding strategy has no randomness.
```
y = data['target_ids'].to(device, dtype=torch.long)
ids = data['source_ids'].to(device, dtype=torch.long)
mask = data['source_mask'].to(device, dtype=torch.long)
generated_ids = model.generate(input_ids=ids, attention_mask=mask, max_length=512, do_sample=True, top_p=0.9, top_k=0, num_return_sequences=1)
```
@patrickvonplaten
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I just trained the T5 model on a translation task and then generate text using the above code.
### Expected behavior
The model generates the same text even if I don't fix the random seed.
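For reference, a sketch of two ways to get repeatable generations (reusing `model`, `ids` and `mask` from the snippet above):
```python
# Greedy/beam decoding is deterministic because no sampling is involved:
generated_ids = model.generate(input_ids=ids, attention_mask=mask, max_length=512, do_sample=False)

# Sampling (do_sample=True with top_p) draws tokens at random, so it only
# repeats when the seed is fixed beforehand:
from transformers import set_seed
set_seed(42)
generated_ids = model.generate(input_ids=ids, attention_mask=mask, max_length=512, do_sample=True, top_p=0.9, top_k=0)
```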
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20500/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20499
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20499/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20499/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20499/events
|
https://github.com/huggingface/transformers/issues/20499
| 1,468,729,767
|
I_kwDOCUB6oc5Xiwmn
| 20,499
|
ValueError: Expected input batch_size (8) to match target batch_size (1008).
|
{
"login": "gngpostalsrvc",
"id": 82219143,
"node_id": "MDQ6VXNlcjgyMjE5MTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/82219143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gngpostalsrvc",
"html_url": "https://github.com/gngpostalsrvc",
"followers_url": "https://api.github.com/users/gngpostalsrvc/followers",
"following_url": "https://api.github.com/users/gngpostalsrvc/following{/other_user}",
"gists_url": "https://api.github.com/users/gngpostalsrvc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gngpostalsrvc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gngpostalsrvc/subscriptions",
"organizations_url": "https://api.github.com/users/gngpostalsrvc/orgs",
"repos_url": "https://api.github.com/users/gngpostalsrvc/repos",
"events_url": "https://api.github.com/users/gngpostalsrvc/events{/privacy}",
"received_events_url": "https://api.github.com/users/gngpostalsrvc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Disabling one-hot encoding (`encoding['labels'] = [[stage] for stage in examples['Stage']]`) and the data collator (`# data_collator=data_collator`) seems to resolve this issue.\r\n\r\nNot sure if labels should be sparsely encoded and what the data_collator does to create this error, maybe it 'collates' on the wrong axis?"
] | 1,669
| 1,676
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger @lys
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am fine-tuning a custom model for multiclass classification. When I run the sixth cell of [this Colab notebook](https://colab.research.google.com/drive/1NYk_RJcZ3GmwYQTv9X3kcG_FbHDjWvsd?usp=sharing), I get the following error:
```
ValueError Traceback (most recent call last)
[<ipython-input-17-f9d56f5f4088>](https://localhost:8080/#) in <module>
27 )
28
---> 29 trainer.train()
30
31 trainer.push_to_hub()
8 frames
[/usr/local/lib/python3.7/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1503 resume_from_checkpoint=resume_from_checkpoint,
1504 trial=trial,
-> 1505 ignore_keys_for_eval=ignore_keys_for_eval,
1506 )
1507
[/usr/local/lib/python3.7/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1747 tr_loss_step = self.training_step(model, inputs)
1748 else:
-> 1749 tr_loss_step = self.training_step(model, inputs)
1750
1751 if (
[/usr/local/lib/python3.7/dist-packages/transformers/trainer.py](https://localhost:8080/#) in training_step(self, model, inputs)
2506
2507 with self.compute_loss_context_manager():
-> 2508 loss = self.compute_loss(model, inputs)
2509
2510 if self.args.n_gpu > 1:
[/usr/local/lib/python3.7/dist-packages/transformers/trainer.py](https://localhost:8080/#) in compute_loss(self, model, inputs, return_outputs)
2538 else:
2539 labels = None
-> 2540 outputs = model(**inputs)
2541 # Save past state if it exists
2542 # TODO: this needs to be fixed and made cleaner later.
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
1238 elif self.config.problem_type == "single_label_classification":
1239 loss_fct = CrossEntropyLoss()
-> 1240 loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
1241 elif self.config.problem_type == "multi_label_classification":
1242 loss_fct = BCEWithLogitsLoss()
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py](https://localhost:8080/#) in forward(self, input, target)
1164 return F.cross_entropy(input, target, weight=self.weight,
1165 ignore_index=self.ignore_index, reduction=self.reduction,
-> 1166 label_smoothing=self.label_smoothing)
1167
1168
[/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py](https://localhost:8080/#) in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
3012 if size_average is not None or reduce is not None:
3013 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 3014 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
3015
3016
ValueError: Expected input batch_size (8) to match target batch_size (1008).
```
### Expected behavior
I expected training to continue as usual.
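For context, this kind of input/target mismatch is what one sees when labels are one-hot encoded while the model runs in `single_label_classification` mode; a hedged sketch of the shapes, where 126 classes is only an assumption chosen so that 8 * 126 = 1008 matches the error message:
```python
import torch
from torch.nn import CrossEntropyLoss

num_labels, batch_size = 126, 8            # assumed: 8 * 126 = 1008 as in the error
logits = torch.randn(batch_size, num_labels)

# One-hot labels have shape (8, 126); labels.view(-1) then yields 1008 targets,
# which is exactly the mismatch reported above.
one_hot = torch.nn.functional.one_hot(torch.randint(num_labels, (batch_size,)), num_labels)

# single_label_classification expects sparse class indices of shape (8,) instead:
sparse = one_hot.argmax(dim=-1)
loss = CrossEntropyLoss()(logits.view(-1, num_labels), sparse.view(-1))
print(loss.item())
```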
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20499/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20498
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20498/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20498/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20498/events
|
https://github.com/huggingface/transformers/pull/20498
| 1,468,616,031
|
PR_kwDOCUB6oc5D6s2a
| 20,498
|
Repurpose torchdynamo training args towards torch._dynamo
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
This PR re-uses the current `torchdynamo` training argument and makes it compatible with `torch._dynamo`, the module now shipped inside PyTorch (in the nightlies). This is slightly breaking, but the torchdynamo package has migrated into PyTorch proper, and the integration was marked as experimental.
The "fx2trt-fp16" backend is not advertised by PyTorch, so I removed it.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20498/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20498",
"html_url": "https://github.com/huggingface/transformers/pull/20498",
"diff_url": "https://github.com/huggingface/transformers/pull/20498.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20498.patch",
"merged_at": 1669824645000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20497
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20497/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20497/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20497/events
|
https://github.com/huggingface/transformers/pull/20497
| 1,468,577,610
|
PR_kwDOCUB6oc5D6kqT
| 20,497
|
Fix disk offload for full safetensors checkpoints
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
#20321 was only tested with safetensors checkpoints containing multiple shards. The code failed for full (non-sharded) checkpoints; this PR fixes it.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20497/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20497",
"html_url": "https://github.com/huggingface/transformers/pull/20497",
"diff_url": "https://github.com/huggingface/transformers/pull/20497.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20497.patch",
"merged_at": 1669751910000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20496
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20496/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20496/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20496/events
|
https://github.com/huggingface/transformers/pull/20496
| 1,468,517,021
|
PR_kwDOCUB6oc5D6XhL
| 20,496
|
[modelcard] Set model name if empty
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,687
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
When building the model card, if the `model_name` is unspecified we set it to `training_args.output_dir`:
https://github.com/huggingface/transformers/blob/86e435bbb1e54f169351dbb798141afee7fa1b93/src/transformers/modelcard.py#L592-L593
This is typically the case for intermediate pushes to the Hub during training (when we don't specify any extra push-to-hub kwargs).
However, if we're fine-tuning from **within** a model repo, we set `--output_dir=./`. This means that `Path(trainer.args.output_dir).name=""`, and so `model_name=""`.
This causes a problem when we try and push the model card to the Hub: a model name of `""` registers as an **empty** model index name, meaning the push is rejected:
```bash
remote: ----------------------------------------------------------
remote: Sorry, your push was rejected during YAML metadata verification:
remote: - Error: "model-index[0].name" is not allowed to be empty
remote: ----------------------------------------------------------
remote: Please find the documentation at:
remote: https://huggingface.co/docs/hub/model-cards#model-card-metadata
remote: ----------------------------------------------------------
```
This PR sets the `model_name` to `finetuned_from` in the case that it is empty (`""`), meaning that the push to hub is allowed.
Unless there's a neater way of inferring the model repo id, this is probably the best way of preventing rejected pushes to the Hub?
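A minimal sketch of the fallback (the checkpoint names below are illustrative, not taken from this PR):
```python
from pathlib import Path

def resolve_model_name(output_dir: str, finetuned_from: str) -> str:
    # When fine-tuning from within the model repo, output_dir is "./" and
    # Path("./").name is "", which the Hub rejects as a model-index name.
    name = Path(output_dir).name
    return name if name else finetuned_from

print(resolve_model_name("./", "openai/whisper-small"))              # -> openai/whisper-small
print(resolve_model_name("my-finetuned-model", "bert-base-cased"))   # -> my-finetuned-model
```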
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20496/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20496/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20496",
"html_url": "https://github.com/huggingface/transformers/pull/20496",
"diff_url": "https://github.com/huggingface/transformers/pull/20496.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20496.patch",
"merged_at": 1669802143000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20495
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20495/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20495/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20495/events
|
https://github.com/huggingface/transformers/pull/20495
| 1,468,501,770
|
PR_kwDOCUB6oc5D6URz
| 20,495
|
[modelcard] Check for IterableDataset
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,687
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds a check for an HF `IterableDataset` (i.e. an HF dataset in streaming mode).
Required to build the model card when training models with streaming mode!
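A minimal sketch of the kind of guard involved (not the exact code added to `modelcard.py`):
```python
from datasets import Dataset, IterableDataset

def one_line_summary(dataset) -> str:
    # Streaming datasets (IterableDataset) have no len() and limited metadata,
    # so the model-card builder has to treat them specially instead of failing.
    if isinstance(dataset, IterableDataset):
        return "streaming dataset (size unknown)"
    return f"{len(dataset)} examples"

ds = Dataset.from_dict({"text": ["a", "b"]})
print(one_line_summary(ds))  # "2 examples"
# A dataset loaded with load_dataset(..., streaming=True) would take the first branch.
```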
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20495/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20495",
"html_url": "https://github.com/huggingface/transformers/pull/20495",
"diff_url": "https://github.com/huggingface/transformers/pull/20495.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20495.patch",
"merged_at": 1669802107000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20494
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20494/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20494/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20494/events
|
https://github.com/huggingface/transformers/issues/20494
| 1,468,498,808
|
I_kwDOCUB6oc5Xh4N4
| 20,494
|
PyTorch training scripts freeze when preprocessing_num_workers > 1
|
{
"login": "Lokiiiiii",
"id": 36520926,
"node_id": "MDQ6VXNlcjM2NTIwOTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/36520926?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lokiiiiii",
"html_url": "https://github.com/Lokiiiiii",
"followers_url": "https://api.github.com/users/Lokiiiiii/followers",
"following_url": "https://api.github.com/users/Lokiiiiii/following{/other_user}",
"gists_url": "https://api.github.com/users/Lokiiiiii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lokiiiiii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lokiiiiii/subscriptions",
"organizations_url": "https://api.github.com/users/Lokiiiiii/orgs",
"repos_url": "https://api.github.com/users/Lokiiiiii/repos",
"events_url": "https://api.github.com/users/Lokiiiiii/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lokiiiiii/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Any update here ?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,669
| 1,673
| 1,673
|
NONE
| null |
### System Info
transformers 4.24.0
datasets 2.7.1
Dockerfile: https://github.com/aws/deep-learning-containers/blob/master/pytorch/training/docker/1.12/py3/cu113/Dockerfile.gpu
### Who can help?
@sgugger, @patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
### Local Repro
```
python3.8 run_mlm.py --dataloader_drop_last True --dataset_config_name wikitext-103-v1 --dataset_name wikitext --do_train True --fp16 True --max_seq_length 512 --model_name_or_path bert-base-uncased --num_train_epochs 16 --per_device_train_batch_size 32 --preprocessing_num_workers 12
```
### AWS Repro
```
!pip install sagemaker
from sagemaker.pytorch import PyTorch
PyTorch(
framework_version='1.12',
py_version="py38",
instance_type="ml.p4d.24xlarge",
distribution={"pytorchddp": {"enabled": True}},
source_dir="examples/pytorch/language-modeling",
entry_point="run_mlm.py",
hyperparameters={
'dataset_name': 'wikitext',
'dataset_config_name': 'wikitext-103-v1',
'do_train': True,
'fp16': True,
'model_name_or_path': 'bert-base-uncased',
'num_train_epochs': 10,
'per_device_train_batch_size': 32,
'preprocessing_num_workers': 12,
},
).fit()
```
### Expected behavior
Training to completion without stalls/freezes at data preprocessing.
Currently the training stalls with the last log line being:
```
Grouping texts in chunks of 512
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20494/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/20494/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20493
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20493/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20493/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20493/events
|
https://github.com/huggingface/transformers/pull/20493
| 1,468,486,459
|
PR_kwDOCUB6oc5D6Q-5
| 20,493
|
[CI, WHISPER] fix the latest failing test
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
In a recent update, we followed the original code, which changed some of the suppress tokens for better performance. This led to a small change in the output in one particular case. Tested with the original code, and we now get the correct output!
See [here](https://huggingface.co/openai/whisper-large/commit/ed97120f929257fb801f99587ada69be0daf5b0a) for the particular commit
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20493/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20493/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20493",
"html_url": "https://github.com/huggingface/transformers/pull/20493",
"diff_url": "https://github.com/huggingface/transformers/pull/20493.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20493.patch",
"merged_at": 1669817248000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20492
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20492/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20492/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20492/events
|
https://github.com/huggingface/transformers/pull/20492
| 1,468,434,115
|
PR_kwDOCUB6oc5D6Fzo
| 20,492
|
Support extraction of both train and eval XLA graphs
|
{
"login": "jeffhataws",
"id": 56947987,
"node_id": "MDQ6VXNlcjU2OTQ3OTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/56947987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffhataws",
"html_url": "https://github.com/jeffhataws",
"followers_url": "https://api.github.com/users/jeffhataws/followers",
"following_url": "https://api.github.com/users/jeffhataws/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffhataws/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffhataws/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffhataws/subscriptions",
"organizations_url": "https://api.github.com/users/jeffhataws/orgs",
"repos_url": "https://api.github.com/users/jeffhataws/repos",
"events_url": "https://api.github.com/users/jeffhataws/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffhataws/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks so much @sgugger ! "
] | 1,669
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
Neuron supports extraction of XLA graphs for compilation. However, when both the do_train and do_eval options are enabled, the sizes returned by a tensor operator can be 0. To avoid an INVALID_ARGUMENT error, we use an inequality in the check for whether a tensor needs padding or not.
# What does this PR do?
This PR reduces compilation time of Hugging Face training/evaluation on Trainium using Neuron SDK.
Neuron SDK enables Hugging Face training on Trainium. To reduce compilation time, we have an optional [parallel compilation step](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html) which 1) extracts XLA HLO graphs by trial execution of the training script with stub graphs that output zeros only, 2) performs parallel compilation of the graphs, and 3) places the compiled graphs into the Neuron cache. Currently, this flow only works for the do_train step in the [HF trainer API tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html#) by itself, but encounters an INVALID_ARGUMENT error when do_eval is included together with do_train.
The error during parallel compilation is due to code at https://github.com/huggingface/transformers/blob/61a51f5f23d7ce6b8acf61b5aa170e01d7658d74/src/transformers/trainer.py#L3147 that creates a new tensor based on the shape of another tensor. The tensor is created, but its values are zero (as opposed to the shape) during parallel compilation (trial execution of stub graphs that output zeros only).
This PR introduces an inequality in the check for whether a tensor needs padding or not. During normal execution on all platforms, the max_size is greater than or equal to the tensor size, so in no case should max_size be smaller than the tensor size, except in our case, where we do trial execution of stub graphs that output zeros only.
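A minimal sketch of the kind of check involved (not the actual `trainer.py` code; shapes and the pad value are illustrative):
```python
import torch

def pad_to_max(tensor: torch.Tensor, max_size: int, pad_index: int = -100) -> torch.Tensor:
    # Only pad when the tensor is actually shorter than max_size. Using an
    # inequality instead of a strict equality check means a trial run that
    # reports a size of 0 becomes a no-op rather than an INVALID_ARGUMENT error.
    if tensor.shape[1] >= max_size:
        return tensor
    padded = tensor.new_full((tensor.shape[0], max_size), pad_index)
    padded[:, : tensor.shape[1]] = tensor
    return padded

x = torch.ones(2, 3, dtype=torch.long)
print(pad_to_max(x, 5).shape)  # torch.Size([2, 5])
print(pad_to_max(x, 0).shape)  # torch.Size([2, 3]) -- no-op instead of an error
```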
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20492/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20492",
"html_url": "https://github.com/huggingface/transformers/pull/20492",
"diff_url": "https://github.com/huggingface/transformers/pull/20492.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20492.patch",
"merged_at": 1669815827000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20491
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20491/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20491/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20491/events
|
https://github.com/huggingface/transformers/pull/20491
| 1,468,394,338
|
PR_kwDOCUB6oc5D58-V
| 20,491
|
Fix documentation code to import facebook/detr-resnet-50 model
|
{
"login": "JuanFKurucz",
"id": 31422367,
"node_id": "MDQ6VXNlcjMxNDIyMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/31422367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuanFKurucz",
"html_url": "https://github.com/JuanFKurucz",
"followers_url": "https://api.github.com/users/JuanFKurucz/followers",
"following_url": "https://api.github.com/users/JuanFKurucz/following{/other_user}",
"gists_url": "https://api.github.com/users/JuanFKurucz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuanFKurucz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuanFKurucz/subscriptions",
"organizations_url": "https://api.github.com/users/JuanFKurucz/orgs",
"repos_url": "https://api.github.com/users/JuanFKurucz/repos",
"events_url": "https://api.github.com/users/JuanFKurucz/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuanFKurucz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
Changes example import line
`>>> model = DetrForObjectDetection.from_pretrained("facebook/resnet-50")`
to
`>>> model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")`
As trying to import `"facebook/resnet-50"` raises:
```
OSError: facebook/resnet-50 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20491/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20491/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20491",
"html_url": "https://github.com/huggingface/transformers/pull/20491",
"diff_url": "https://github.com/huggingface/transformers/pull/20491.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20491.patch",
"merged_at": 1669746626000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20490
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20490/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20490/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20490/events
|
https://github.com/huggingface/transformers/pull/20490
| 1,468,334,601
|
PR_kwDOCUB6oc5D5v6g
| 20,490
|
fixed small typo
|
{
"login": "sandeepgadhwal",
"id": 18506968,
"node_id": "MDQ6VXNlcjE4NTA2OTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/18506968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sandeepgadhwal",
"html_url": "https://github.com/sandeepgadhwal",
"followers_url": "https://api.github.com/users/sandeepgadhwal/followers",
"following_url": "https://api.github.com/users/sandeepgadhwal/following{/other_user}",
"gists_url": "https://api.github.com/users/sandeepgadhwal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sandeepgadhwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sandeepgadhwal/subscriptions",
"organizations_url": "https://api.github.com/users/sandeepgadhwal/orgs",
"repos_url": "https://api.github.com/users/sandeepgadhwal/repos",
"events_url": "https://api.github.com/users/sandeepgadhwal/events{/privacy}",
"received_events_url": "https://api.github.com/users/sandeepgadhwal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes a small typo in the VAN model.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
Models:
- van
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20490/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20490",
"html_url": "https://github.com/huggingface/transformers/pull/20490",
"diff_url": "https://github.com/huggingface/transformers/pull/20490.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20490.patch",
"merged_at": 1669739713000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20489
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20489/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20489/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20489/events
|
https://github.com/huggingface/transformers/pull/20489
| 1,468,182,262
|
PR_kwDOCUB6oc5D5PCL
| 20,489
|
Fix minimum version for device_map
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
It turns out everything works fine with PyTorch 1.10, which already contains `torch.cuda.mem_get_info`, the function Accelerate relies on (even though it isn't documented in PyTorch 1.10).
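For reference, here is a minimal sketch of how the availability of that call can be checked locally (nothing Transformers-specific is assumed; `torch.cuda.mem_get_info` returns free and total device memory in bytes):

```python
import torch

# Sketch: verify that this torch build exposes the call Accelerate relies on
# for device_map="auto". mem_get_info returns (free_bytes, total_bytes).
if torch.cuda.is_available() and hasattr(torch.cuda, "mem_get_info"):
    free, total = torch.cuda.mem_get_info(0)
    print(f"GPU 0: {free / 1e9:.2f} GB free out of {total / 1e9:.2f} GB")
else:
    print("CUDA not available, or torch is too old to support device_map='auto'")
```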
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20489/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/20489/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20489",
"html_url": "https://github.com/huggingface/transformers/pull/20489",
"diff_url": "https://github.com/huggingface/transformers/pull/20489.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20489.patch",
"merged_at": 1669824656000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20488
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20488/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20488/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20488/events
|
https://github.com/huggingface/transformers/pull/20488
| 1,467,949,790
|
PR_kwDOCUB6oc5D4coR
| 20,488
|
remove truncation in whisper
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @ArthurZucker ,\r\n\r\nJust got an error, seems related to this issue?\r\n\r\n`RuntimeError: The size of tensor a (507) must match the size of tensor b (448) at non-singleton dimension 1`",
"@RK-BAKU This PR is not merged yet. Are you trying this PR instead of a stable release or the `main` branch?",
"Hey! @RK-BAKU Could you provide a reproducing script and open a separate issue ? "
] | 1,669
| 1,669
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
remove truncation in whisper
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20488/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20488",
"html_url": "https://github.com/huggingface/transformers/pull/20488",
"diff_url": "https://github.com/huggingface/transformers/pull/20488.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20488.patch",
"merged_at": 1669805162000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20487
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20487/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20487/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20487/events
|
https://github.com/huggingface/transformers/pull/20487
| 1,467,802,279
|
PR_kwDOCUB6oc5D381Q
| 20,487
|
extract warnings in GH workflows
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,669
| 1,669
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
This PR uses the change in #20474 to collect the warnings emitted during our scheduled daily GitHub CI runs, and provides a button in the Slack reports to access this information.
<img width="493" alt="image" src="https://user-images.githubusercontent.com/2521628/204501762-1ed46a7c-4a91-40ba-9b16-3288168f0dfc.png">
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20487/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20487",
"html_url": "https://github.com/huggingface/transformers/pull/20487",
"diff_url": "https://github.com/huggingface/transformers/pull/20487.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20487.patch",
"merged_at": 1669733935000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20486
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20486/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20486/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20486/events
|
https://github.com/huggingface/transformers/pull/20486
| 1,467,782,533
|
PR_kwDOCUB6oc5D34jZ
| 20,486
|
fix cuda OOM by using single Prior
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This will require #20485 to be merged, otherwise the logits for `fp16_sampling` will differ. ",
"I will wait the mentioned PR #20485 :-) then back to this one. Also see my comment in that PR π @ArthurZucker π ",
"I'll just push to model to another repo and ping you back here! "
] | 1,669
| 1,669
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
Fixes the OOM issue with the `5b` model and `fp16` sampling.
Also fixes the slow generation test by sending each prior to `cuda` only when it is actually used.
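As a rough illustration (a minimal sketch of the general pattern only; `prior.sample` is a hypothetical call, not the actual Jukebox API), moving each prior to the GPU just before it is used and back afterwards looks like this:

```python
import torch

def sample_one_prior_at_a_time(priors, tokens, device="cuda"):
    # Sketch: keep only a single prior on the GPU at any given time
    # instead of moving the whole list of priors up front.
    for prior in priors:
        prior.to(device)                    # load the current prior onto the GPU
        with torch.no_grad():
            tokens = prior.sample(tokens)   # hypothetical per-level sampling step
        prior.to("cpu")                     # offload before touching the next prior
        torch.cuda.empty_cache()            # release the cached blocks
    return tokens
```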
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20486/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20486",
"html_url": "https://github.com/huggingface/transformers/pull/20486",
"diff_url": "https://github.com/huggingface/transformers/pull/20486.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20486.patch",
"merged_at": 1669968345000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20485
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20485/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20485/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20485/events
|
https://github.com/huggingface/transformers/pull/20485
| 1,467,755,579
|
PR_kwDOCUB6oc5D3yvL
| 20,485
|
[CORE] Use model prefix instead of cls
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @ArthurZucker \r\n\r\nFor this issue and the fix in the PR, it's would be very nice if you could provide a code snippet to demonstrate what is currently wrong to justify the fix. It will also help future developers (either inside HF or external contributors) to understand it much better and faster (if they ever track back to this PR for some reason).\r\n\r\nThank you, looking forward for it!\r\n\r\n",
"Discussed internally with @ArthurZucker. I am very much against using a \"dynamic\" attribute instead of the class attribute.",
"Yep, closing this! \r\nSnipper is impossible to provide as it is very specific to jukebox and an invisible bug! ",
"Since you close the PR, I would not ask the code snippet. It's still somehow strange that it is impossible to give a code snippet. From the description, it looks like if we have a checkpoint, when loading it, we will get some weights not being loaded correctly. \r\n\r\nOne possible way is to create a model, save it, and reload it. Then point out which weights are not loaded correctly.\r\nI might miss many details here, and things may not be so easy. But if you are able to find the invisible bug, it's no longer invisible @ArthurZucker .\r\n\r\nAnd if it is completely different thing than I imagine, just ignore me :-)",
"No, the weights are not loaded correctly but the error is silent. Here is a snippet (but nothing will be outputed) \r\n\r\n```python \r\n>>> from transformers import JukeboxPrior, JukeboxModel\r\n>>> model = JukeboxModel.from_pretrained(\"openai/jukebox-5b-lyrics\").priors[0]\r\n>>> prior = JukeboxPrior.from_pretrained(\"openai/jukebox-5b-lyrics\")\r\n>>> assert model.encoder.start_token == prior.encoder.start_token\r\n```\r\n\r\nThere will be no `missing` or `unexpected keys` but the weights will not be loaded",
"OK, it's silent, but you speak for it so now it's clear π― !"
] | 1,669
| 1,669
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
This addresses a very particular bug that occurs when `base_model_prefix` is specific to the instance.
Each `JukeboxPrior` is defined by its level of generation, so each level has its own base model prefix, e.g. `priors.0`.
When loading the checkpoints from a pretrained `JukeboxModel`, `_load_pretrained_model` uses `cls.base_model_prefix` even though `model.base_model_prefix` is always available. This means that the weights will [not be properly loaded](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L472), but the failure is silent.
This should address any current and future model loading issues where the `base_model_prefix` is modified per instance.
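To illustrate the underlying Python behaviour (hypothetical toy classes, not the real loading code): reading the attribute through `cls` only sees the class-level default, while reading it through the instance sees the per-instance override.

```python
class ToyPreTrainedModel:
    base_model_prefix = "model"  # class-level default

class ToyPrior(ToyPreTrainedModel):
    def __init__(self, level: int):
        # overridden per instance, e.g. "priors.0", "priors.1", ...
        self.base_model_prefix = f"priors.{level}"

prior = ToyPrior(0)
print(ToyPrior.base_model_prefix)  # "model"     <- what cls.base_model_prefix returns
print(prior.base_model_prefix)     # "priors.0"  <- what model.base_model_prefix returns
```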
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20485/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20485",
"html_url": "https://github.com/huggingface/transformers/pull/20485",
"diff_url": "https://github.com/huggingface/transformers/pull/20485.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20485.patch",
"merged_at": null
}
|