| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/21185
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21185/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21185/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21185/events
|
https://github.com/huggingface/transformers/issues/21185
| 1,548,622,972
|
I_kwDOCUB6oc5cThx8
| 21,185
|
"text2text-generation" pipeline fails when setting return_dict_in_generate=True
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is only valid if we indeed have the argument `return_dict_in_generate`. Otherwise the pipeline will also fail because `output_ids` will not be a dictionary. Pipelines in general currently don't support outputting anything else than the text prediction. See #21274. @Narsil do you think we could support something like `output_generate_dict` which would just output everything? (might be useful for people who want to use all in one tokenizer, feature extractor and model but still post process)",
"> want to use all in one tokenizer, feature extractor and model but still post process\r\n\r\nFeels a bit power usery to me.\r\n\r\nTwo options :\r\n\r\n- Subclass pipeline and use it instead `pipeline(..., pipeline_class=MyOwnClass)` which will use your subclass where everything is free to modify (and still benefit from batching and such).\r\n- Make it shareable to the world with a custom pipeline: https://huggingface.co/docs/transformers/v4.26.0/en/add_new_pipeline#how-to-create-a-custom-pipeline\r\n\r\nThere are **many** things that could be done within text-generation pipeline, but I fear we should be very sparse in what we agree to add and maintain. The main goal of the pipeline is to be useable by non-ML people, meaning, we need to refrain from adding many use cases which include understanding how the tokens work. Advances usage is always possible with lower level tools and I feel that's where they belong.\r\n\r\nDoes that make sense ?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.23.1
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.9.4
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Setting `return_dict_in_generate` to `True` in the `text2text-generation` pipeline returns the following error:
```
Traceback (most recent call last):
File "/Users/karimfoda/.asdf/installs/python/3.9.4/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/karimfoda/.asdf/installs/python/3.9.4/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/karimfoda/.vscode/extensions/ms-python.python-2022.20.2/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/Users/karimfoda/.vscode/extensions/ms-python.python-2022.20.2/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/Users/karimfoda/.vscode/extensions/ms-python.python-2022.20.2/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/Users/karimfoda/.vscode/extensions/ms-python.python-2022.20.2/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "/Users/karimfoda/.vscode/extensions/ms-python.python-2022.20.2/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Users/karimfoda/.vscode/extensions/ms-python.python-2022.20.2/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "/Users/karimfoda/Documents/STUDIES/PYTHON/KAIZAN/DATASETS/scripts/debug_return_dict_in_generate_error.py", line 6, in <module>
print(sentiment_t5_model("hello this is a test", return_dict_in_generate=True, output_scores = True))
File "/Users/karimfoda/Documents/STUDIES/PYTHON/KAIZAN/DATASETS/_env/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py", line 148, in __call__
result = super().__call__(*args, **kwargs)
File "/Users/karimfoda/Documents/STUDIES/PYTHON/KAIZAN/DATASETS/_env/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1074, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/Users/karimfoda/Documents/STUDIES/PYTHON/KAIZAN/DATASETS/_env/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1081, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "/Users/karimfoda/Documents/STUDIES/PYTHON/KAIZAN/DATASETS/_env/lib/python3.9/site-packages/transformers/pipelines/base.py", line 990, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/Users/karimfoda/Documents/STUDIES/PYTHON/KAIZAN/DATASETS/_env/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py", line 173, in _forward
output_ids = output_ids.reshape(in_b, out_b // in_b, *output_ids[0].shape[1:])
AttributeError: 'GreedySearchEncoderDecoderOutput' object has no attribute 'reshape'
```
Running the following code reproduces this error:
```python
from transformers import pipeline

sentiment_t5_model = pipeline("text2text-generation", model="mrm8488/t5-base-finetuned-imdb-sentiment")
print(sentiment_t5_model("hello this is a test", return_dict_in_generate=True, output_scores=True))
```
### Expected behavior
The expected output is:
`[{'generated_text': 'positive', 'scores': (tensor([[-19.5107, -12.7762, -13.3044, ..., -41.9292, -41.8459, -41.9196]]), tensor([[-69.0289, -8.4889, -31.7621, ..., -75.5579, -75.6114, -75.5323]]))}]`
I was able to produce this output and fix this issue by changing:
https://github.com/huggingface/transformers/blob/6d67664380c09a1e9e1e3771f2124cd49b72f6be/src/transformers/pipelines/text2text_generation.py#L188-L192
to:
```python
out_b = output_ids['sequences'].shape[0]
if self.framework == "pt":
    output_ids['sequences'] = output_ids['sequences'].reshape(in_b, out_b // in_b, *output_ids['sequences'].shape[1:])
elif self.framework == "tf":
    output_ids['sequences'] = tf.reshape(output_ids['sequences'], (in_b, out_b // in_b, *output_ids['sequences'].shape[1:]))
```
and
https://github.com/huggingface/transformers/blob/6d67664380c09a1e9e1e3771f2124cd49b72f6be/src/transformers/pipelines/text2text_generation.py#L201-L206
to:
```python
record = {
    f"{self.return_name}_text": self.tokenizer.decode(
        output_ids,
        skip_special_tokens=True,
        clean_up_tokenization_spaces=clean_up_tokenization_spaces,
    ),
    "scores": model_outputs["output_ids"]["scores"],
}
```
If this is an acceptable fix, I'm happy to submit a PR for these changes.
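In the meantime, the subclassing route suggested in the comments (`pipeline(..., pipeline_class=MyOwnClass)`) can serve as a workaround. Below is a rough sketch for the PyTorch backend only; the class name, the single-input assumption in `_forward`, and the exact shape handling are my own illustration, not library-provided code:
```python
from transformers import Text2TextGenerationPipeline, pipeline

class ScoreReturningPipeline(Text2TextGenerationPipeline):
    def _forward(self, model_inputs, **generate_kwargs):
        # Call generate() ourselves so its dict output never reaches the
        # stock reshape logic; assumes a single input text per call.
        output = self.model.generate(
            **model_inputs,
            return_dict_in_generate=True,
            output_scores=True,
            **generate_kwargs,
        )
        # The stock postprocess expects output_ids of shape (in_b, out_b, seq_len).
        return {"output_ids": output.sequences[None], "scores": output.scores}

    def postprocess(self, model_outputs, **postprocess_params):
        records = super().postprocess(model_outputs, **postprocess_params)
        for record in records:
            record["scores"] = model_outputs["scores"]
        return records

pipe = pipeline(
    "text2text-generation",
    model="mrm8488/t5-base-finetuned-imdb-sentiment",
    pipeline_class=ScoreReturningPipeline,
)
print(pipe("hello this is a test"))
```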
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21185/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21184
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21184/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21184/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21184/events
|
https://github.com/huggingface/transformers/issues/21184
| 1,548,356,659
|
I_kwDOCUB6oc5cSgwz
| 21,184
|
ImportError: cannot import name 'LayoutLMv3ForTokenClassification' from 'transformers' (unknown location)
|
{
"login": "Harikishore-KA",
"id": 17734294,
"node_id": "MDQ6VXNlcjE3NzM0Mjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/17734294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Harikishore-KA",
"html_url": "https://github.com/Harikishore-KA",
"followers_url": "https://api.github.com/users/Harikishore-KA/followers",
"following_url": "https://api.github.com/users/Harikishore-KA/following{/other_user}",
"gists_url": "https://api.github.com/users/Harikishore-KA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Harikishore-KA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Harikishore-KA/subscriptions",
"organizations_url": "https://api.github.com/users/Harikishore-KA/orgs",
"repos_url": "https://api.github.com/users/Harikishore-KA/repos",
"events_url": "https://api.github.com/users/Harikishore-KA/events{/privacy}",
"received_events_url": "https://api.github.com/users/Harikishore-KA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nI'm not able to reproduce this error. It might make sense to uninstall and install transformers in a new, clean environment.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,677
| 1,677
|
NONE
| null |
Unable to import LayoutLMv3 models with `from transformers import LayoutLMv3ForTokenClassification`.
OS: Windows 10
Python: Python 3.9.4 (tags/v3.9.4:1f2e308, Apr 4 2021, 13:27:16) [MSC v.1928 64 bit (AMD64)] on win32
Package versions: transformers==4.26.0.dev0, torch==1.13.1
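As a first diagnostic for an "(unknown location)" import error, it can help to check which installation the interpreter actually resolves before reinstalling in a clean environment (a sketch of the suggestion from the comments, not part of the original report):
```python
import transformers

# "(unknown location)" usually indicates a broken or partial install:
# verify the version and the path Python picked up.
print(transformers.__version__)
print(transformers.__file__)

from transformers import LayoutLMv3ForTokenClassification  # should succeed in a clean env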

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21184/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21183
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21183/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21183/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21183/events
|
https://github.com/huggingface/transformers/issues/21183
| 1,548,274,046
|
I_kwDOCUB6oc5cSMl-
| 21,183
|
RuntimeError: Tensors must be contiguous error while finetuning with deepspeed.
|
{
"login": "FahriBilici",
"id": 28020526,
"node_id": "MDQ6VXNlcjI4MDIwNTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/28020526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FahriBilici",
"html_url": "https://github.com/FahriBilici",
"followers_url": "https://api.github.com/users/FahriBilici/followers",
"following_url": "https://api.github.com/users/FahriBilici/following{/other_user}",
"gists_url": "https://api.github.com/users/FahriBilici/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FahriBilici/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FahriBilici/subscriptions",
"organizations_url": "https://api.github.com/users/FahriBilici/orgs",
"repos_url": "https://api.github.com/users/FahriBilici/repos",
"events_url": "https://api.github.com/users/FahriBilici/events{/privacy}",
"received_events_url": "https://api.github.com/users/FahriBilici/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"1.3B param of weights + grads + optim states in mixed precision would need about `18*1.3=24`GB of memory, plus you need more memory for activations and temps and cuda kernels.\r\n\r\nThe free colab account is too limited to do much on it with even a small model. It barely has any cpu memory so no memory to offload to.\r\n\r\nYou could use deepspeed to offload to local disk (`nvme`), it'll be slow but doable I think if your local disc is large enough. Please see: https://huggingface.co/docs/transformers/main/main_classes/deepspeed#nvme-support\r\n\r\n",
"The other approach is to activate BNB's Adam, so it will cut down on a lot of optim states weights (2 bytes instead of 8) except the embedding params at full 8 bytes. so you will be looking at about 17GB for weights + grads + optim states in mixed precision - but it's still too large for colab without offloading.",
"my actual goal is finetuning gpt-j on google colab pro but since google colab uses credits I am experimenting with 1.3B on normal colab. I also used nvme settings with zero3 example but still I got the same error without aio part. if I add aio part I got `ValidationError: 1 validation error for DeepSpeedZeroConfig\r\naio\r\n extra fields not permitted (type=value_error.extra)`",
"Understood.\r\n\r\ndeepspeed's `nvme` offload requires `libaio`. \r\n\r\nAs we only integrate deepspeed, any questions about deepspeed functionality itself and errors such as above should be posted at https://github.com/microsoft/DeepSpeed/issues since we aren't the maintainers of deepspeed.\r\n\r\nThank you.",
"but still `RuntimeError: Tensors must be contiguous` happens. I saw you made merge about fixing this but that still happens.\r\n",
"I'm struggling here with supporting you, @FahriBilici - please kindly read \r\nhttps://github.com/huggingface/transformers/blob/main/ISSUES.md#the-github-issues\r\nand file a proper issue with the full traceback and invocation command, I will be able to help you then.\r\nThanks.",
"I will share my colab notebook and training set once I prepare.",
"I repeat what's needed is the command line and the full traceback. Thank you. ",
"the full error is \r\n```\r\nThe following columns in the training set don't have a corresponding argument in \"GPTNeoForCausalLM.forward\" and have been ignored: text. If text are not expected by \"GPTNeoForCausalLM.forward\", you can safely ignore this message.\r\nDetected ZeRO Offload and non-DeepSpeed optimizers: This combination should work as long as the custom optimizer has both CPU and GPU implementation (except LAMB)\r\n[2023-01-21 10:17:13,756] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.8.0, git-hash=unknown, git-branch=unknown\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n[<ipython-input-23-3435b262f1ae>](https://localhost:8080/#) in <module>\r\n----> 1 trainer.train()\r\n\r\n10 frames\r\n[/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py](https://localhost:8080/#) in broadcast(tensor, src, group, async_op)\r\n 1402 group_src_rank = get_group_rank(group, src)\r\n 1403 opts.rootRank = group_src_rank\r\n-> 1404 work = group.broadcast([tensor], opts)\r\n 1405 if async_op:\r\n 1406 return work\r\n\r\nRuntimeError: Tensors must be contiguous\r\n```\r\n\r\nmy config file is \r\n```\r\n{\r\n \"zero_optimization\": {\r\n \"stage\": 3,\r\n \"offload_optimizer\": {\r\n \"device\": \"nvme\",\r\n \"nvme_path\": \"/local_nvme\",\r\n \"pin_memory\": true,\r\n \"buffer_count\": 4,\r\n \"fast_init\": false\r\n },\r\n \"offload_param\": {\r\n \"device\": \"nvme\",\r\n \"nvme_path\": \"/local_nvme\",\r\n \"pin_memory\": true,\r\n \"buffer_count\": 5,\r\n \"buffer_size\": 1e8,\r\n \"max_in_cpu\": 1e9\r\n },\r\n \"overlap_comm\": true,\r\n \"contiguous_gradients\": true,\r\n \"sub_group_size\": 1e9,\r\n \"reduce_bucket_size\": \"auto\",\r\n \"stage3_prefetch_bucket_size\": \"auto\",\r\n \"stage3_param_persistence_threshold\": \"auto\",\r\n \"stage3_max_live_parameters\": 1e9,\r\n \"stage3_max_reuse_distance\": 1e9,\r\n \"stage3_gather_16bit_weights_on_model_save\": true\r\n\r\n },\r\n \"train_batch_size\": \"auto\",\r\n \"train_micro_batch_size_per_gpu\": \"auto\"\r\n}\r\n```\r\nmy training code is \r\n\r\n```\r\nfrom transformers import TrainingArguments, Trainer\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"neo\",\r\n evaluation_strategy=\"epoch\",\r\n learning_rate=2e-5,\r\n num_train_epochs=10,\r\n weight_decay=0.01,\r\n gradient_checkpointing=True,\r\n deepspeed='config.json',\r\n report_to=None\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=tokenized_datasets['train'],\r\n eval_dataset=tokenized_datasets['validation'],\r\n data_collator=data_collator,\r\n)\r\ntrainer.train()\r\n\r\n```",
"ok, clearly we have a miscommunication here. I will try one last time.\r\n\r\nTo help you we need the **full traceback** and not the last line of it. ",
"```\r\nThe following columns in the training set don't have a corresponding argument in `GPTNeoForCausalLM.forward` and have been ignored: text. If text are not expected by `GPTNeoForCausalLM.forward`, you can safely ignore this message.\r\nDetected ZeRO Offload and non-DeepSpeed optimizers: This combination should work as long as the custom optimizer has both CPU and GPU implementation (except LAMB)\r\n[2023-01-21 10:17:13,756] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.8.0, git-hash=unknown, git-branch=unknown\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-23-3435b262f1ae> in <module>\r\n----> 1 trainer.train()\r\n\r\n10 frames\r\n/usr/local/lib/python3.8/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)\r\n 1525 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size\r\n 1526 )\r\n-> 1527 return inner_training_loop(\r\n 1528 args=args,\r\n 1529 resume_from_checkpoint=resume_from_checkpoint,\r\n\r\n/usr/local/lib/python3.8/dist-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)\r\n 1594 )\r\n 1595 if args.deepspeed:\r\n-> 1596 deepspeed_engine, optimizer, lr_scheduler = deepspeed_init(\r\n 1597 self, num_training_steps=max_steps, resume_from_checkpoint=resume_from_checkpoint\r\n 1598 )\r\n\r\n/usr/local/lib/python3.8/dist-packages/transformers/deepspeed.py in deepspeed_init(trainer, num_training_steps, resume_from_checkpoint, inference)\r\n 342 )\r\n 343 \r\n--> 344 deepspeed_engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)\r\n 345 \r\n 346 if resume_from_checkpoint is not None:\r\n\r\n/usr/local/lib/python3.8/dist-packages/deepspeed/__init__.py in initialize(args, model, optimizer, model_parameters, training_data, lr_scheduler, mpu, dist_init_required, collate_fn, config, config_params)\r\n 123 \r\n 124 if not isinstance(model, PipelineModule):\r\n--> 125 engine = DeepSpeedEngine(args=args,\r\n 126 model=model,\r\n 127 optimizer=optimizer,\r\n\r\n/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py in __init__(self, args, model, optimizer, model_parameters, training_data, lr_scheduler, mpu, dist_init_required, collate_fn, config, config_params, dont_change_device)\r\n 299 \r\n 300 # Configure distributed model\r\n--> 301 self._configure_distributed_model(model)\r\n 302 \r\n 303 self._get_model_parameters()\r\n\r\n/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py in _configure_distributed_model(self, model)\r\n 1185 \r\n 1186 if not self.amp_enabled():\r\n-> 1187 self._broadcast_model()\r\n 1188 \r\n 1189 # check if parameters are duplicated in optimizer param_groups\r\n\r\n/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py in _broadcast_model(self)\r\n 1100 else:\r\n 1101 if torch.is_tensor(p) and is_replicated(p):\r\n-> 1102 dist.broadcast(p,\r\n 1103 groups._get_broadcast_src_rank(),\r\n 1104 group=self.data_parallel_group)\r\n\r\n/usr/local/lib/python3.8/dist-packages/deepspeed/comm/comm.py in log_wrapper(*args, **kwargs)\r\n 125 # Return the op, then stop the op's timer\r\n 126 try:\r\n--> 127 return func(*args, **kwargs)\r\n 128 finally:\r\n 129 if comms_logger.enabled:\r\n\r\n/usr/local/lib/python3.8/dist-packages/deepspeed/comm/comm.py in broadcast(tensor, src, group, async_op, prof, log_name, debug)\r\n 230 
debug=get_caller_func()):\r\n 231 global cdb\r\n--> 232 return cdb.broadcast(tensor=tensor, src=src, group=group, async_op=async_op)\r\n 233 \r\n 234 \r\n\r\n/usr/local/lib/python3.8/dist-packages/deepspeed/comm/torch.py in broadcast(self, tensor, src, group, async_op)\r\n 68 \r\n 69 def broadcast(self, tensor, src, group=None, async_op=False):\r\n---> 70 return torch.distributed.broadcast(tensor=tensor,\r\n 71 src=src,\r\n 72 group=group,\r\n\r\n/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py in broadcast(tensor, src, group, async_op)\r\n 1402 group_src_rank = get_group_rank(group, src)\r\n 1403 opts.rootRank = group_src_rank\r\n-> 1404 work = group.broadcast([tensor], opts)\r\n 1405 if async_op:\r\n 1406 return work\r\n\r\nRuntimeError: Tensors must be contiguous\r\n```",
"Excellent. Thank you for providing the full traceback, @FahriBilici \r\n\r\nAs you can see the issue comes from inside deepspeed and is unrelated to the fix I made earlier even though the error message is the same. Therefore you want to report it here https://github.com/microsoft/DeepSpeed/issues\r\n\r\nAlternatively, you can traverse your model before you pass it to the Trainer and ensure that all tensors are contiguous.\r\n\r\nProbably something along the lines of:\r\n```\r\nfor p in model.parameters():\r\n p = p.contiguous()\r\n```\r\n\r\nI haven't tested it, this is just an idea to try.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,677
| 1,677
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@stas00 @ArthurZucker @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am just trying to fine-tune "EleutherAI/gpt-neo-1.3B" for causal LM on Google Colab. Without any optimization it gives an out-of-memory error. While checking what I could do, I found DeepSpeed. I added `deepspeed='ds_config.json'` to my training arguments in a Jupyter notebook and used the configuration from the official page, which is "ds_config_zero2.json".
### Expected behavior
Training should start.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21183/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21182
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21182/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21182/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21182/events
|
https://github.com/huggingface/transformers/issues/21182
| 1,538,604,669
|
I_kwDOCUB6oc5btT59
| 21,182
|
Exclude the parameters with `requires_grad=False` in the `Trainer` optimizer.
|
{
"login": "avsolatorio",
"id": 3009596,
"node_id": "MDQ6VXNlcjMwMDk1OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3009596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avsolatorio",
"html_url": "https://github.com/avsolatorio",
"followers_url": "https://api.github.com/users/avsolatorio/followers",
"following_url": "https://api.github.com/users/avsolatorio/following{/other_user}",
"gists_url": "https://api.github.com/users/avsolatorio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avsolatorio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avsolatorio/subscriptions",
"organizations_url": "https://api.github.com/users/avsolatorio/orgs",
"repos_url": "https://api.github.com/users/avsolatorio/repos",
"events_url": "https://api.github.com/users/avsolatorio/events{/privacy}",
"received_events_url": "https://api.github.com/users/avsolatorio/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Sounds like a welcome change!"
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
### Feature request
Optimize training for models with weights/parameters that are set to `requires_grad=False`, by excluding these parameters from the optimizer.
### Motivation
I am building a Seq2Seq model where I use a pre-trained model for the encoder. I freeze all the parameters of the encoder by setting `requires_grad=False` (see the sketch below). I expected training to speed up compared to a model where both the encoder and decoder weights are trainable. However, I found that there is no difference in either speed or memory.
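For concreteness, the freezing described above amounts to something like this minimal sketch (`model.encoder` is a stand-in for however the pre-trained encoder is attached in the actual model):
```python
# Freeze every encoder parameter so only the decoder is trained.
for param in model.encoder.parameters():
    param.requires_grad = False
```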
I investigated a bit and found that all the model parameters, regardless of whether gradients are required to be computed, are included in the optimizer https://github.com/huggingface/transformers/blob/00ba7cadd812437708b380ab078a3cfe8cfaff31/src/transformers/trainer.py#L1021-L1030
I tested this idea by subclassing `Seq2SeqTrainer` and updating the above snippet to:
```Python
optimizer_grouped_parameters = [
{
# Add here the `p.requires_grad` condition
"params": [p for n, p in opt_model.named_parameters() if (n in decay_parameters and p.requires_grad)],
"weight_decay": self.args.weight_decay,
},
{
# Add here the `p.requires_grad` condition
"params": [p for n, p in opt_model.named_parameters() if (n not in decay_parameters and p.requires_grad)],
"weight_decay": 0.0,
},
]
```
Doing this actually improved both the speed and the memory during the training.
I was wondering if this is something we could add to the codebase. If not, I am curious why we shouldn't exclude parameters that are not meant to be trainable from the optimizer.
### Your contribution
I can make the PR if this is an acceptable change. 🤗
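Until such a change lands, a subclass along the following lines should have the same effect. This is only a sketch based on the snippet above; the import paths match v4.25-era internals and may move between releases:
```python
from transformers import Seq2SeqTrainer, Trainer
from transformers.pytorch_utils import ALL_LAYERNORM_LAYERS
from transformers.trainer_pt_utils import get_parameter_names

class FrozenAwareSeq2SeqTrainer(Seq2SeqTrainer):
    def create_optimizer(self):
        opt_model = self.model
        if self.optimizer is None:
            decay_parameters = get_parameter_names(opt_model, ALL_LAYERNORM_LAYERS)
            decay_parameters = [n for n in decay_parameters if "bias" not in n]
            optimizer_grouped_parameters = [
                {
                    # Decayed parameters that still require gradients.
                    "params": [
                        p for n, p in opt_model.named_parameters()
                        if n in decay_parameters and p.requires_grad
                    ],
                    "weight_decay": self.args.weight_decay,
                },
                {
                    # Non-decayed parameters that still require gradients.
                    "params": [
                        p for n, p in opt_model.named_parameters()
                        if n not in decay_parameters and p.requires_grad
                    ],
                    "weight_decay": 0.0,
                },
            ]
            optimizer_cls, optimizer_kwargs = Trainer.get_optimizer_cls_and_kwargs(self.args)
            self.optimizer = optimizer_cls(optimizer_grouped_parameters, **optimizer_kwargs)
        return self.optimizer
```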
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21182/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21181
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21181/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21181/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21181/events
|
https://github.com/huggingface/transformers/pull/21181
| 1,538,602,251
|
PR_kwDOCUB6oc5HqG0q
| 21,181
|
Updates to computer vision section of the Preprocess doc
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
This PR expands the Computer Vision section of the Preprocess doc to include a small explainer on the difference between image augmentation and image preprocessing, and what `ImageProcessor` handles. It also refactors the code example to use `ImageProcessor` for normalizing and converting images to tensors instead of `torchvision.transforms`.
It mentions padding for certain cases (DETR), and the availability of post-processing methods for some models/tasks.
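For readers of this thread, the refactored pattern boils down to roughly the following (an illustrative sketch rather than the PR's actual diff; the checkpoint and file names are arbitrary examples):
```python
from PIL import Image
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
image = Image.open("example.jpg")

# The image processor resizes, normalizes, and converts to tensors in one call,
# replacing a manual torchvision.transforms pipeline.
inputs = image_processor(image, return_tensors="pt")
```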
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21181/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21181",
"html_url": "https://github.com/huggingface/transformers/pull/21181",
"diff_url": "https://github.com/huggingface/transformers/pull/21181.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21181.patch",
"merged_at": 1674135817000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21180
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21180/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21180/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21180/events
|
https://github.com/huggingface/transformers/issues/21180
| 1,538,585,190
|
I_kwDOCUB6oc5btPJm
| 21,180
|
How to use Ipex via transformers
|
{
"login": "Oxi84",
"id": 25420033,
"node_id": "MDQ6VXNlcjI1NDIwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oxi84",
"html_url": "https://github.com/Oxi84",
"followers_url": "https://api.github.com/users/Oxi84/followers",
"following_url": "https://api.github.com/users/Oxi84/following{/other_user}",
"gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions",
"organizations_url": "https://api.github.com/users/Oxi84/orgs",
"repos_url": "https://api.github.com/users/Oxi84/repos",
"events_url": "https://api.github.com/users/Oxi84/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oxi84/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for such question as we keep issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,677
| 1,677
|
NONE
| null |
### Feature request
I saw you implemented IPEX (the Intel CPU optimization), but there is no example of how to use it via Python. Only this example is available:
```
python run_qa.py \
  --model_name_or_path csarron/bert-base-uncased-squad-v1 \
  --dataset_name squad \
  --do_eval \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /tmp/ \
  --use_ipex \
  --jit_mode
```
And this is not very useful for me, as I work from a Python script rather than the command line.
How would I use it here, for example:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
```
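For reference, applying IPEX from a Python script (rather than through the example script's `--use_ipex` flag) would look roughly like the sketch below. It uses the `intel_extension_for_pytorch` package directly and is not a documented `transformers` API:
```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Apply IPEX's CPU kernel and memory-layout optimizations to the eager model.
model = ipex.optimize(model)

inputs = tokenizer("hello this is a test", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
```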
### Motivation
documentation
### Your contribution
documentation
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21180/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21179
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21179/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21179/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21179/events
|
https://github.com/huggingface/transformers/issues/21179
| 1,538,534,300
|
I_kwDOCUB6oc5btCuc
| 21,179
|
Whisper model adds "!" char in the beginning of each predicted audio transcription
|
{
"login": "navalnica",
"id": 29257108,
"node_id": "MDQ6VXNlcjI5MjU3MTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/29257108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/navalnica",
"html_url": "https://github.com/navalnica",
"followers_url": "https://api.github.com/users/navalnica/followers",
"following_url": "https://api.github.com/users/navalnica/following{/other_user}",
"gists_url": "https://api.github.com/users/navalnica/gists{/gist_id}",
"starred_url": "https://api.github.com/users/navalnica/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/navalnica/subscriptions",
"organizations_url": "https://api.github.com/users/navalnica/orgs",
"repos_url": "https://api.github.com/users/navalnica/repos",
"events_url": "https://api.github.com/users/navalnica/events{/privacy}",
"received_events_url": "https://api.github.com/users/navalnica/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@ArthurZucker \r\n\r\nCould it be the timestamps PR ? We did change the `force_bos_token_ids` but only when we want timestamps, right ?",
"I think it related to that indeed, I'll have a look!",
"Okay so the `WhisperTimestampProcessor` is always added to the list of logit processors. This is the cause of the error 😉 \r\nSee this script were I added \r\n```python \r\n return_timestamps = generate_kwargs.pop(\"return_timestamps\", False)\r\n tokens = self.model.generate(\r\n input_features=model_inputs.pop(\"input_features\"),\r\n logits_processor=[WhisperTimeStampLogitsProcessor()] if return_timestamps else None,\r\n **generate_kwargs,\r\n )\r\n```\r\nin the `_forward` call of pipeline. \r\n\r\n```python \r\nfrom transformers import pipeline\r\nfrom datasets import load_dataset\r\n\r\nlibri = load_dataset(\"librispeech_asr\", f\"clean\", split=\"test\", cache_dir=\"/home/arthur_huggingface_co/.cache/huggingface/datasets\")\r\n\r\npipe = pipeline(\r\n task=\"automatic-speech-recognition\",\r\n model='openai/whisper-tiny',\r\n chunk_length_s=8, stride_length_s=1, device=0,\r\n)\r\n\r\npipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(\r\n language='fr', task='transcribe'\r\n)\r\n\r\nres = pipe(libri[0][\"audio\"][\"array\"], return_timestamps=False)\r\n```",
"Two possible fixes : \r\n- both the forward and the initialisation should be consistent. So the `return_timestamp` arg should be added to the `self.args`. \r\n- Just add this to the generation config or the parameters of the _forward call as I did. \r\nWDYT @Narsil ",
"What about just keeping `return_timestamps` and just sending it to both `preprocess` and `_forward` ?\r\n\r\nIt seems odd to push it into `generate_kwargs` if the generation doesn't care about it (directly I mean)"
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
### System Info
Google Colab instance with Tesla T4
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi @gante @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Here is the link to [Google Colab notebook](https://colab.research.google.com/drive/1VrcO9BW4OSEm8jmQjvzZ8vikMLsX5n3n?usp=sharing)
```python
!pip install git+https://github.com/huggingface/transformers
from transformers import pipeline
pipe = pipeline(
task="automatic-speech-recognition",
model='ales/whisper-small-belarusian',
chunk_length_s=8, stride_length_s=1, device=0,
)
pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(
language='be', task='transcribe'
)
# run with transformers installed from the latest commit (00ba7cadd812437708b380ab078a3cfe8cfaff31 at the time of writing).
# all transcriptions have an extra "!" at the beginning!
res = pipe('audio_sample.ogg')
res2 = pipe('audio_sample2.ogg')
print(res)
print(res2)
```
output:
```
{'text': '!Хацеў бы спаткацца з вамі на вуліцу ціхаю зорнаю ночы і сказаць, ці бачыце гэтыя зоркі, ясныя зоркі, іграб лес.!'}
{'text': '!Прывітанне, як вашыя справы.'}
```
### Expected behavior
The problem is that, when using the `ales/whisper-small-belarusian` Whisper model that I fine-tuned from `openai/whisper-small`, each transcription the model now produces starts with an exclamation mark ("!"). This looks like an error in model decoding.
* The problem occurred when I upgraded my environment. I use `git+https://github.com/huggingface/transformers` as a version specifier to install `transformers` from source. The reason for installing `transformers` from source is that the current latest release, `v4.25.1`, does not have the needed functionality (e.g. the `WhisperTokenizer` class does not have the `get_decoder_prompt_ids` method)
* The current latest commit in the `transformers` repository is `00ba7cadd812437708b380ab078a3cfe8cfaff31`
* When I deleted my Colab runtime and created a new one, now installing `transformers` from an older commit, `a081f292ca8479eaf66d7396186021268f128829`, transcriptions returned to normal: no exclamation mark at the beginning.
* I guess this error was introduced by one of the pull requests merged after commit `a081f292ca8479eaf66d7396186021268f128829`
Compare the two transcriptions (examples can be found above and in the [Google Colab notebook](https://colab.research.google.com/drive/1VrcO9BW4OSEm8jmQjvzZ8vikMLsX5n3n?usp=sharing)):
* Transcription with `transformers` installed from the latest commit (`00ba7cadd812437708b380ab078a3cfe8cfaff31`):
```
{'text': '!Прывітанне, як вашыя справы.'}
```
* Transcription if we use older commit (`a081f292ca8479eaf66d7396186021268f128829`):
```
{'text': 'Прывітанне, як вашыя справы.'}
```
And this happens with any audio file I pass to the pipeline.
Thanks!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21179/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21178
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21178/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21178/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21178/events
|
https://github.com/huggingface/transformers/pull/21178
| 1,538,527,431
|
PR_kwDOCUB6oc5Hp2BA
| 21,178
|
Add disclaimer for necessary fake models
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
This PR adds a disclaimer for the fake models we can't really remove since the canonical checkpoint is very big already. You can see the result in the [preview](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21178/en/model_doc/gptj#transformers.GPTJModel.forward.example).
If this is acceptable, I'll add it to the other places where we want a tiny random model instead of the huge one for the docstrings.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21178/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21178",
"html_url": "https://github.com/huggingface/transformers/pull/21178",
"diff_url": "https://github.com/huggingface/transformers/pull/21178.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21178.patch",
"merged_at": 1674155776000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21177
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21177/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21177/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21177/events
|
https://github.com/huggingface/transformers/pull/21177
| 1,538,400,318
|
PR_kwDOCUB6oc5HpZ08
| 21,177
|
Rewrite a couple of lines in the TF XLA doc
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"(Going to assume this one is small enough to merge without @sgugger approval, but feel free to yell at me if it wasn't!)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
MEMBER
| null |
This PR makes a quick edit at the top of the TF XLA doc to clarify that for training/inference you can just pass `jit_compile` to `model.compile()`! (cc @sayakpaul)
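Concretely, the pattern the doc now points to is as simple as the following minimal sketch (the checkpoint name is an arbitrary example):
```python
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Keras compiles the train/predict step functions with XLA when jit_compile=True.
model.compile(optimizer="adam", jit_compile=True)
```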
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21177/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21177",
"html_url": "https://github.com/huggingface/transformers/pull/21177",
"diff_url": "https://github.com/huggingface/transformers/pull/21177.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21177.patch",
"merged_at": 1674064386000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21176
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21176/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21176/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21176/events
|
https://github.com/huggingface/transformers/issues/21176
| 1,538,359,588
|
I_kwDOCUB6oc5bsYEk
| 21,176
|
FlaxGPTNeoForCausalLM not working properly with fp16 when using left padding.
|
{
"login": "T-Almeida",
"id": 19167453,
"node_id": "MDQ6VXNlcjE5MTY3NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/19167453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/T-Almeida",
"html_url": "https://github.com/T-Almeida",
"followers_url": "https://api.github.com/users/T-Almeida/followers",
"following_url": "https://api.github.com/users/T-Almeida/following{/other_user}",
"gists_url": "https://api.github.com/users/T-Almeida/gists{/gist_id}",
"starred_url": "https://api.github.com/users/T-Almeida/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/T-Almeida/subscriptions",
"organizations_url": "https://api.github.com/users/T-Almeida/orgs",
"repos_url": "https://api.github.com/users/T-Almeida/repos",
"events_url": "https://api.github.com/users/T-Almeida/events{/privacy}",
"received_events_url": "https://api.github.com/users/T-Almeida/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @T-Almeida 👋 To be candid with you, I've never played with Flax + generate + fp16, so I can't confirm whether it is a model, generate, or flax issue without a deep dive :) In any case, I can tell you that we've stopped development on Flax, and that is why you won't see newer features there (such as the left padding warning). \r\n\r\nWith decoder-only models, you must use left padding. Otherwise, the first generated token will take `<PAD>` as the previous token and will return different results (the results you get with right padding are slightly different if you pay attention).\r\n\r\n______________________________________\r\n\r\n@sanchit-gandhi nvm, I've located the bug. Numerical masking strikes again!\r\n\r\n(OLD: TL;DR before going into debug mode, a quick check with you :D Flax + generate + fp16 on GPTNeo returns gibberish, where fp32 works fine. Have you seen anything similar before? There is also a chance that the example makes incorrect use of fp16.)\r\n",
"Hi @gante ty for the feedback, you are right with right padding the outputs are indeed different (my bad). Sadly, I also notice that the Flax generate API is, indeed, the one that lacks more features when compared with torch or tf (specially the contrastive_search method, that I can only use in torch, because there is no TFGPTneo... implementation :'( )\r\n\r\nRelated to this issue, when using left padding with fp16 the model is outputting nan logits when predicting the next token, check the below snippet:\r\n\r\n```python\r\nouts = jax_model(input_ids, attention_mask=attention_mask).logits\r\n\r\nDeviceArray([[[nan, nan, nan, ..., nan, nan, nan],\r\n [nan, nan, nan, ..., nan, nan, nan],\r\n [nan, nan, nan, ..., nan, nan, nan],\r\n ...,\r\n [nan, nan, nan, ..., nan, nan, nan],\r\n [nan, nan, nan, ..., nan, nan, nan],\r\n [nan, nan, nan, ..., nan, nan, nan]],\r\n\r\n [[nan, nan, nan, ..., nan, nan, nan],\r\n [nan, nan, nan, ..., nan, nan, nan],\r\n [nan, nan, nan, ..., nan, nan, nan],\r\n ...,\r\n [nan, nan, nan, ..., nan, nan, nan],\r\n [nan, nan, nan, ..., nan, nan, nan],\r\n [nan, nan, nan, ..., nan, nan, nan]]], dtype=float16)\r\n```\r\n\r\nSo, the issue should be in the model call and not in the generation?",
"@T-Almeida thank you for your comment! It turns out that I've seen this pattern of `nan` this week, and the same fix applies here. Will open a PR soon ;)\r\n\r\nRe TFGPTNeo, we are always open to contributions 🙏 I'd be happy to guide anyone that'd like to contribute!",
"Hey @T-Almeida,\r\n\r\nSounds like you've done a good job at assimilating the different Flax dtype terms (which isn't straightforward)! And cool to see that you're running JAX on GPU!\r\n\r\nAs you've correctly specified, `to_fp16()` will convert all the Flax model params to float16, but will leave the computations untouched (i.e. they remain in float32 precision). We need to specify `dtype=jnp.float16` to ensure our forward pass is also done in float16 precision.\r\n\r\nLooks like @gante is on the case with fixing the Flax attention masks (which seems to be the problem here)!",
"@T-Almeida merged, if you install the development version of transformers it should work :)",
"I can confirm that now it is working properly! Thanks a lot @gante, @sanchit-gandhi for the really quick fix!\r\n\r\n"
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
### System Info
WARNING:tensorflow:From /usr/local/lib/python3.8/dist-packages/transformers/commands/env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2023-01-18 15:47:59.442290: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
- `transformers` version: 4.25.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.3 (gpu)
- Jax version: 0.3.25
- JaxLib version: 0.3.25
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hello there, I am having a bit of trouble successfully generating text (beam search) with the FlaxGPTNeoForCausalLM model when using fp16.
I provide two colab notebooks to replicate this issue:
- torch version, which works fine both on fp32 and fp16: https://colab.research.google.com/drive/15Fy3VmTfUVGGGC1NAGP8p_DqZajDzxZk?usp=sharing
- flax version, which fails on fp16: https://colab.research.google.com/drive/1t588H8_1SGSj6g1yVXgkeRiIvsxQiOKA?usp=sharing
Very briefly, in torch I convert the model to fp16 by doing this: `GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M", torch_dtype=torch.float16)`, while in Flax I do the following:
```python
jax_model = FlaxGPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M", dtype=jax.numpy.float16)
jax_model.params = jax_model.to_fp16(jax_model.params)
```
For both cases, I am using the following sentences as input **with** left padding:
```
texts = ["My name is Julien and I like to", "Why float16 is giving such a strange outputs?"]
```
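For completeness, here is a minimal sketch of the generation call (the tokenizer setup, `num_beams`, `max_length`, and `pad_token_id` values are illustrative assumptions rather than the exact values from the notebooks):
```python
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxGPTNeoForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M", padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo has no pad token by default

model = FlaxGPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M", dtype=jnp.float16)
model.params = model.to_fp16(model.params)

inputs = tokenizer(texts, return_tensors="np", padding=True)
outputs = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    num_beams=2,
    max_length=56,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.batch_decode(outputs.sequences, skip_special_tokens=False))
```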
Output of the torch version:
```python
['<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>My name is Julien and I like to call you Julien. I like to call you Julien. I like to call you Julien. I like to call you Julien. I like to call you Julien. I like to call you Julien. I like to call you',
'<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>Why float16 is giving such a strange outputs?\n\nA:\n\nfloat16 is giving such a strange outputs?\n\nYes, it does.\n\nA:\n\nYes, it does.\n\nA:\n\nYes, it does.\n\nA:\n\n']
```
Output of the flax version:
```python
['<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>My name is Julien and I like to!<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>',
'<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>Why float16 is giving such a strange outputs?!<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>']
```
As you can see, in the case of Flax I always get the `!` token (which corresponds to id: 0) followed by the `<|endoftext|>` token (id: 50256). Strangely, if I don't do the left padding and process each sentence individually, I get the same output as the torch version.
### Expected behavior
Basically, I want the equivalent of `GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M", torch_dtype=torch.float16)` but in Flax. So, from the [docs](https://huggingface.co/docs/transformers/v4.25.1/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained.dtype), I understand that `to_fp16()` converts the model params to fp16, while setting the dtype to `jnp.float16` forces the computation to be in fp16. However, when I set `dtype=jnp.float16` and use left padding, the generation does not work properly. If, instead, I only use `to_fp16()` to convert the params and leave `dtype=jnp.float32`, the code works properly, but it is two times slower than the pytorch version, which means it is not truly running in fp16.
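To make the two configurations explicit, here is a sketch of my understanding (based only on the docs linked above):
```python
import jax.numpy as jnp
from transformers import FlaxGPTNeoForCausalLM

# Config A: params in fp16, computation still in fp32 (the dtype default).
# Generation is correct with left padding, but ~2x slower than torch fp16.
model_a = FlaxGPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
model_a.params = model_a.to_fp16(model_a.params)

# Config B: params AND computation in fp16.
# Fast, but generation breaks when the inputs are left-padded.
model_b = FlaxGPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M", dtype=jnp.float16)
model_b.params = model_b.to_fp16(model_b.params)
```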
I also want to add that this issue only seems to appear when I add left padding to the inputs in Flax.
Any idea why this is happening?
P.S. I am also not sure if what I am doing is correct, but I couldn't find anything similar to this issue.
**UPDATE**
I also noticed that in the pytorch version, if I use the default padding behaviour (right padding), I get the following warning, which **does not appear** in Flax.
```
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
```
So I tried using right padding in the case of Flax and, to my surprise, it worked! It gave me the same outputs as the left-padded version in torch.
I do not understand whether this behaviour is intended or not, but I find it a bit confusing, since I believe that left padding would make more sense.
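For reference, the working Flax case differs from the failing one only in the tokenizer's padding side (hypothetical snippet, reusing `texts` and the fp16 model from above):
```python
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M", padding_side="right")
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer(texts, return_tensors="np", padding=True)
# With right padding, the Flax fp16 generation matches the torch left-padded
# outputs, and no right-padding warning is emitted (unlike in torch).
```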
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21176/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21175
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21175/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21175/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21175/events
|
https://github.com/huggingface/transformers/pull/21175
| 1,538,338,014
|
PR_kwDOCUB6oc5HpMUF
| 21,175
|
Fix `Mask2FormerForUniversalSegmentation` and failed tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
For `Mask2FormerForUniversalSegmentation`, the tests `test_torchscript_xxx` fail due to `auxiliary_logits` in `Mask2FormerForUniversalSegmentationOutput`. This is a `list of dict of tensors`, which is not supported by torchscript tracing.
The related tests will pass if we don't output this value. Currently, we have
```python
output_auxiliary_logits = (
self.config.use_auxiliary_loss if output_auxiliary_logits is None else output_auxiliary_logits
)
```
where `use_auxiliary_loss` is `True`. However, this seems strange to me, and I believe it should be `self.config.output_auxiliary_logits` instead. So I changed it.
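In other words, the change amounts to swapping the config attribute used as the default (a sketch of the intent; the actual PR diff may touch more lines):
```python
# Before: defaulted to the loss flag, which is True, so torchscript tracing
# always received the unsupported list-of-dict output.
output_auxiliary_logits = (
    self.config.use_auxiliary_loss if output_auxiliary_logits is None else output_auxiliary_logits
)

# After: defaults to the dedicated output flag instead.
output_auxiliary_logits = (
    self.config.output_auxiliary_logits if output_auxiliary_logits is None else output_auxiliary_logits
)
```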
This also fixes the related tests, as `output_auxiliary_logits` is `None` during the tests.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21175/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21175",
"html_url": "https://github.com/huggingface/transformers/pull/21175",
"diff_url": "https://github.com/huggingface/transformers/pull/21175.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21175.patch",
"merged_at": 1674119708000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21174
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21174/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21174/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21174/events
|
https://github.com/huggingface/transformers/pull/21174
| 1,538,288,309
|
PR_kwDOCUB6oc5HpBww
| 21,174
|
Bump torch from 1.6.0 to 1.13.1 in /examples/research_projects/lxmert
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21174). All of your documentation changes will be reflected on that endpoint."
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
Bumps [torch](https://github.com/pytorch/pytorch) from 1.6.0 to 1.13.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/releases">torch's releases</a>.</em></p>
<blockquote>
<h2>PyTorch 1.13.1 Release, small bug fix release</h2>
<p>This release is meant to fix the following issues (regressions / silent correctness):</p>
<ul>
<li>RuntimeError by torch.nn.modules.activation.MultiheadAttention with bias=False and batch_first=True <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88669">#88669</a></li>
<li>Installation via pip on Amazon Linux 2, regression <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88869">#88869</a></li>
<li>Installation using poetry on Mac M1, failure <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88049">#88049</a></li>
<li>Missing masked tensor documentation <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89734">#89734</a></li>
<li>torch.jit.annotations.parse_type_line is not safe (command injection) <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88868">#88868</a></li>
<li>Use the Python frame safely in _pythonCallstack <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88993">#88993</a></li>
<li>Double-backward with full_backward_hook causes RuntimeError <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88312">#88312</a></li>
<li>Fix logical error in get_default_qat_qconfig <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88876">#88876</a></li>
<li>Fix cuda/cpu check on NoneType and unit test <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a> and <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88970">#88970</a></li>
<li>Onnx ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a></li>
<li>Onnx operator_export_type on the new registry <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87735">#87735</a></li>
<li>torchrun AttributeError caused by file_based_local_timer on Windows <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/85427">#85427</a></li>
</ul>
<p>The <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89855">release tracker</a> should contain all relevant pull requests related to this release as well as links to related issues</p>
<h2>PyTorch 1.13: beta versions of functorch and improved support for Apple’s new M1 chips are now available</h2>
<h1>Pytorch 1.13 Release Notes</h1>
<ul>
<li>Highlights</li>
<li>Backwards Incompatible Changes</li>
<li>New Features</li>
<li>Improvements</li>
<li>Performance</li>
<li>Documentation</li>
<li>Developers</li>
</ul>
<h1>Highlights</h1>
<p>We are excited to announce the release of PyTorch 1.13! This includes stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.</p>
<p>Summary:</p>
<ul>
<li>
<p>The BetterTransformer feature set supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models and Nested Tensors is now enabled by default.</p>
</li>
<li>
<p>Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.</p>
</li>
<li>
<p>Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to <code>import functorch</code> and use functorch without needing to install another package.</p>
</li>
<li>
<p>PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.</p>
</li>
</ul>
<table>
<thead>
<tr>
<th>Stable</th>
<th>Beta</th>
<th>Prototype</th>
</tr>
</thead>
<tbody>
<tr>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Better Transformer<!-- raw HTML omitted --><!-- raw HTML omitted -->CUDA 10.2 and 11.3 CI/CD Deprecation <!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs<!-- raw HTML omitted --><!-- raw HTML omitted -->Extend NNC to support channels last and bf16<!-- raw HTML omitted --><!-- raw HTML omitted -->Functorch now in PyTorch Core Library<!-- raw HTML omitted --><!-- raw HTML omitted -->Beta Support for M1 devices<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Arm® Compute Library backend support for AWS Graviton<!-- raw HTML omitted --><!-- raw HTML omitted --> CUDA Sanitizer<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
</tr>
</tbody>
</table>
<p>You can check the blogpost that shows the new features <a href="https://pytorch.org/blog/PyTorch-1.13-release/">here</a>.</p>
<h1>Backwards Incompatible changes</h1>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/blob/master/RELEASE.md">torch's changelog</a>.</em></p>
<blockquote>
<h1>Releasing PyTorch</h1>
<!-- raw HTML omitted -->
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#general-overview">General Overview</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-a-release-branch-preparations">Cutting a release branch preparations</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-release-branches">Cutting release branches</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchpytorch"><code>pytorch/pytorch</code></a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchbuilder--pytorch-domain-libraries"><code>pytorch/builder</code> / PyTorch domain libraries</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-pytorch">Making release branch specific changes for PyTorch</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-domain-libraries">Making release branch specific changes for domain libraries</a></li>
</ul>
</li>
<li><a href="#drafting-rcs-release-candidates-for-pytorch-and-domain-libraries">Drafting RCs (https://github.com/pytorch/pytorch/blob/master/Release Candidates) for PyTorch and domain libraries</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-storage">Release Candidate Storage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-health-validation">Release Candidate health validation</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cherry-picking-fixes">Cherry Picking Fixes</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#promoting-rcs-to-stable">Promoting RCs to Stable</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#additional-steps-to-prepare-for-release-day">Additional Steps to prepare for release day</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#modify-release-matrix">Modify release matrix</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#open-google-colab-issue">Open Google Colab issue</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-releases">Patch Releases</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-criteria">Patch Release Criteria</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-process">Patch Release Process</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#triage">Triage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#issue-tracker-for-patch-releases">Issue Tracker for Patch releases</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-a-release-schedule--cherry-picking">Building a release schedule / cherry picking</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-binaries--promotion-to-stable">Building Binaries / Promotion to Stable</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#hardware--software-support-in-binary-build-matrix">Hardware / Software Support in Binary Build Matrix</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#python">Python</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#tldr">TL;DR</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#accelerator-software">Accelerator Software</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-support-cases">Special support cases</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-topics">Special Topics</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#updating-submodules-for-a-release">Updating submodules for a release</a></li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
<h2>General Overview</h2>
<p>Releasing a new version of PyTorch generally entails 3 major steps:</p>
<ol start="0">
<li>Cutting a release branch preparations</li>
<li>Cutting a release branch and making release branch specific changes</li>
<li>Drafting RCs (Release Candidates), and merging cherry picks</li>
<li>Promoting RCs to stable and performing release day tasks</li>
</ol>
<h2>Cutting a release branch preparations</h2>
<p>Following Requirements needs to be met prior to final RC Cut:</p>
<ul>
<li>Resolve all outstanding issues in the milestones (for example <a href="https://github.com/pytorch/pytorch/milestone/28">1.11.0</a>) before the first RC cut is completed. After the RC cut is completed, the following script should be executed from the builder repo in order to validate the presence of the fixes in the release branch:</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pytorch/pytorch/commit/49444c3e546bf240bed24a101e747422d1f8a0ee"><code>49444c3</code></a> [BE] Do not package caffe2 in wheel (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87986">#87986</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90433">#90433</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/56de8a39c595777f35e342a7cde9d602d57cca32"><code>56de8a3</code></a> Add manual cuda deps search logic (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90411">#90411</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90426">#90426</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/a4d16e0fb670246f18d8c07396808cd5e3766f0b"><code>a4d16e0</code></a> Fix ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90104">#90104</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/80abad3e7460415efe480ab21c1d5c90fc345a27"><code>80abad3</code></a> Handle Tensor.<strong>deepcopy</strong> via clone(), on IPU (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89129">#89129</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89999">#89999</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/73a852acd7946dff8beb818ec723ffa453e7b242"><code>73a852a</code></a> [Release only change] Fix rocm5.1.1 docker image (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90321">#90321</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/029ec163f2b3a7c46ccb3e8d8b377c9319db463a"><code>029ec16</code></a> Add platform markers for linux only extra_install_requires (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88826">#88826</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89924">#89924</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/197c5c0b849cfdb4f6844f90c49bb8adba85e1bb"><code>197c5c0</code></a> Fix cuda/cpu check on NoneType (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90068">#90068</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/aadbeb7416e20a9be694f1da415626135c5c1097"><code>aadbeb7</code></a> Make TorchElastic timer importable on Windows (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88522">#88522</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90045">#90045</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/aa9443306a3ba6e8412e24dd99d17eab3f90e818"><code>aa94433</code></a> Mark IPU device as not supports_as_strided (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89130">#89130</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89998">#89998</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/59b4f3be3bd073b1243e20284fbd09ff43bc66f5"><code>59b4f3b</code></a> Use the Python frame safely in _pythonCallstack (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89997">#89997</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pytorch/pytorch/compare/v1.6.0...v1.13.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21174/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21174",
"html_url": "https://github.com/huggingface/transformers/pull/21174",
"diff_url": "https://github.com/huggingface/transformers/pull/21174.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21174.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21173
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21173/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21173/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21173/events
|
https://github.com/huggingface/transformers/pull/21173
| 1,538,288,304
|
PR_kwDOCUB6oc5HpBwt
| 21,173
|
Bump future from 0.18.2 to 0.18.3 in /examples/research_projects/visual_bert
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21173). All of your documentation changes will be reflected on that endpoint."
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
Bumps [future](https://github.com/PythonCharmers/python-future) from 0.18.2 to 0.18.3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/PythonCharmers/python-future/releases">future's releases</a>.</em></p>
<blockquote>
<h2>v0.18.3</h2>
<p>This is a minor bug-fix release containing a number of fixes:</p>
<ul>
<li>Backport fix for bpo-38804 (c91d70b)</li>
<li>Fix bug in fix_print.py fixer (dffc579)</li>
<li>Fix bug in fix_raise.py fixer (3401099)</li>
<li>Fix newint bool in py3 (fe645ba)</li>
<li>Fix bug in super() with metaclasses (6e27aac)</li>
<li>docs: fix simple typo, reqest -> request (974eb1f)</li>
<li>Correct <strong>eq</strong> (c780bf5)</li>
<li>Pass if lint fails (2abe00d)</li>
<li>Update docker image and parcel out to constant variable. Add comment to update version constant (45cf382)</li>
<li>fix order (f96a219)</li>
<li>Add flake8 to image (046ff18)</li>
<li>Make lint.sh executable (58cc984)</li>
<li>Add docker push to optimize CI (01e8440)</li>
<li>Build System (42b3025)</li>
<li>Add docs build status badge to README.md (3f40bd7)</li>
<li>Use same docs requirements in tox (18ecc5a)</li>
<li>Add docs/requirements.txt (5f9893f)</li>
<li>Add PY37_PLUS, PY38_PLUS, and PY39_PLUS (bee0247)</li>
<li>fix 2.6 test, better comment (ddedcb9)</li>
<li>fix 2.6 test (3f1ff7e)</li>
<li>remove nan test (4dbded1)</li>
<li>include list test values (e3f1a12)</li>
<li>fix other python2 test issues (c051026)</li>
<li>fix missing subTest (f006cad)</li>
<li>import from old imp library on older python versions (fc84fa8)</li>
<li>replace fstrings with format for python 3.4,3.5 (4a687ea)</li>
<li>minor style/spelling fixes (8302d8c)</li>
<li>improve cmp function, add unittest (0d95a40)</li>
<li>Pin typing==3.7.4.1 for Python 3.3 compatiblity (1a48f1b)</li>
<li>Fix various py26 unit test failures (9ca5a14)</li>
<li>Add initial contributing guide with docs build instruction (e55f915)</li>
<li>Add docs building to tox.ini (3ee9e7f)</li>
<li>Support NumPy's specialized int types in builtins.round (b4b54f0)</li>
<li>Added r""" to the docstring to avoid warnings in python3 (5f94572)</li>
<li>Add <strong>subclasscheck</strong> for past.types.basestring (c9bc0ff)</li>
<li>Correct example in README (681e78c)</li>
<li>Add simple documentation (6c6e3ae)</li>
<li>Add pre-commit hooks (a9c6a37)</li>
<li>Handling of <strong>next</strong> and next by future.utils.get_next was reversed (52b0ff9)</li>
<li>Add a test for our fix (461d77e)</li>
<li>Compare headers to correct definition of str (3eaa8fd)</li>
<li><a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/322">#322</a> Add support for negative ndigits in round; additionally, fixing a bug so that it handles passing in Decimal properly (a4911b9)</li>
<li>Add tkFileDialog to future.movers.tkinter (f6a6549)</li>
<li>Sort before comparing dicts in TestChainMap (6126997)</li>
<li>Fix typo (4dfa099)</li>
<li>Fix formatting in "What's new" (1663dfa)</li>
<li>Fix typo (4236061)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/PythonCharmers/python-future/commit/af1db970b0879b59e7aeb798c27a623144561cff"><code>af1db97</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/613">#613</a> from PythonCharmers/lwan/0.18.3-release</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/079ee9b75441d36447cec9981fa1b0032862f64d"><code>079ee9b</code></a> Prepare for 0.18.3 release</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/02f7a8143d5b68f50a1cca44d8f5a58c1925a515"><code>02f7a81</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/610">#610</a> from wshanks/wshanks-patch-1</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/c91d70b34ef0402aef3e9d04364ba98509dca76f"><code>c91d70b</code></a> Backport fix for bpo-38804</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/80523f383fbba1c6de0551e19d0277e73e69573c"><code>80523f3</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/569">#569</a> from jmadler/master</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/5e5af71549c7a7fa0e28f881046e081e231e455d"><code>5e5af71</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/582">#582</a> from r3m0t/patch-6</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/17e4bbd7c676a9a8efd20601e51675c95f74b330"><code>17e4bbd</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/596">#596</a> from abjonnes/fix-print-trailing-comma</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/1b427ba70191927706282840835e31ae0733ee7e"><code>1b427ba</code></a> Merge branch 'xZise-official-count' into master</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/c8eb497336c76d300c6753b47c7f5de505660d7a"><code>c8eb497</code></a> Merge branch 'official-count' of <a href="https://github.com/xZise/python-future">https://github.com/xZise/python-future</a> into ...</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/dffc579dbb7c882fc01fa0c0dfa6b59acef7827d"><code>dffc579</code></a> Fix bug in fix_print.py fixer</li>
<li>Additional commits viewable in <a href="https://github.com/PythonCharmers/python-future/compare/v0.18.2...v0.18.3">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21173/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21173",
"html_url": "https://github.com/huggingface/transformers/pull/21173",
"diff_url": "https://github.com/huggingface/transformers/pull/21173.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21173.patch",
"merged_at": 1674058656000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21172
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21172/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21172/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21172/events
|
https://github.com/huggingface/transformers/pull/21172
| 1,538,288,182
|
PR_kwDOCUB6oc5HpBvC
| 21,172
|
Bump torch from 1.6.0 to 1.13.1 in /examples/research_projects/visual_bert
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21172). All of your documentation changes will be reflected on that endpoint."
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
Bumps [torch](https://github.com/pytorch/pytorch) from 1.6.0 to 1.13.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/releases">torch's releases</a>.</em></p>
<blockquote>
<h2>PyTorch 1.13.1 Release, small bug fix release</h2>
<p>This release is meant to fix the following issues (regressions / silent correctness):</p>
<ul>
<li>RuntimeError by torch.nn.modules.activation.MultiheadAttention with bias=False and batch_first=True <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88669">#88669</a></li>
<li>Installation via pip on Amazon Linux 2, regression <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88869">#88869</a></li>
<li>Installation using poetry on Mac M1, failure <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88049">#88049</a></li>
<li>Missing masked tensor documentation <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89734">#89734</a></li>
<li>torch.jit.annotations.parse_type_line is not safe (command injection) <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88868">#88868</a></li>
<li>Use the Python frame safely in _pythonCallstack <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88993">#88993</a></li>
<li>Double-backward with full_backward_hook causes RuntimeError <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88312">#88312</a></li>
<li>Fix logical error in get_default_qat_qconfig <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88876">#88876</a></li>
<li>Fix cuda/cpu check on NoneType and unit test <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a> and <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88970">#88970</a></li>
<li>Onnx ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a></li>
<li>Onnx operator_export_type on the new registry <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87735">#87735</a></li>
<li>torchrun AttributeError caused by file_based_local_timer on Windows <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/85427">#85427</a></li>
</ul>
<p>The <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89855">release tracker</a> should contain all relevant pull requests related to this release as well as links to related issues</p>
<h2>PyTorch 1.13: beta versions of functorch and improved support for Apple’s new M1 chips are now available</h2>
<h1>Pytorch 1.13 Release Notes</h1>
<ul>
<li>Highlights</li>
<li>Backwards Incompatible Changes</li>
<li>New Features</li>
<li>Improvements</li>
<li>Performance</li>
<li>Documentation</li>
<li>Developers</li>
</ul>
<h1>Highlights</h1>
<p>We are excited to announce the release of PyTorch 1.13! This includes stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.</p>
<p>Summary:</p>
<ul>
<li>
<p>The BetterTransformer feature set supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models and Nested Tensors is now enabled by default.</p>
</li>
<li>
<p>Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.</p>
</li>
<li>
<p>Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to <code>import functorch</code> and use functorch without needing to install another package.</p>
</li>
<li>
<p>PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.</p>
</li>
</ul>
<table>
<thead>
<tr>
<th>Stable</th>
<th>Beta</th>
<th>Prototype</th>
</tr>
</thead>
<tbody>
<tr>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Better Transformer<!-- raw HTML omitted --><!-- raw HTML omitted -->CUDA 10.2 and 11.3 CI/CD Deprecation <!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs<!-- raw HTML omitted --><!-- raw HTML omitted -->Extend NNC to support channels last and bf16<!-- raw HTML omitted --><!-- raw HTML omitted -->Functorch now in PyTorch Core Library<!-- raw HTML omitted --><!-- raw HTML omitted -->Beta Support for M1 devices<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Arm® Compute Library backend support for AWS Graviton<!-- raw HTML omitted --><!-- raw HTML omitted --> CUDA Sanitizer<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
</tr>
</tbody>
</table>
<p>You can check the blogpost that shows the new features <a href="https://pytorch.org/blog/PyTorch-1.13-release/">here</a>.</p>
<h1>Backwards Incompatible changes</h1>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/blob/master/RELEASE.md">torch's changelog</a>.</em></p>
<blockquote>
<h1>Releasing PyTorch</h1>
<!-- raw HTML omitted -->
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#general-overview">General Overview</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-a-release-branch-preparations">Cutting a release branch preparations</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-release-branches">Cutting release branches</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchpytorch"><code>pytorch/pytorch</code></a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchbuilder--pytorch-domain-libraries"><code>pytorch/builder</code> / PyTorch domain libraries</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-pytorch">Making release branch specific changes for PyTorch</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-domain-libraries">Making release branch specific changes for domain libraries</a></li>
</ul>
</li>
<li><a href="#drafting-rcs-release-candidates-for-pytorch-and-domain-libraries">Drafting RCs (https://github.com/pytorch/pytorch/blob/master/Release Candidates) for PyTorch and domain libraries</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-storage">Release Candidate Storage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-health-validation">Release Candidate health validation</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cherry-picking-fixes">Cherry Picking Fixes</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#promoting-rcs-to-stable">Promoting RCs to Stable</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#additional-steps-to-prepare-for-release-day">Additional Steps to prepare for release day</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#modify-release-matrix">Modify release matrix</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#open-google-colab-issue">Open Google Colab issue</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-releases">Patch Releases</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-criteria">Patch Release Criteria</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-process">Patch Release Process</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#triage">Triage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#issue-tracker-for-patch-releases">Issue Tracker for Patch releases</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-a-release-schedule--cherry-picking">Building a release schedule / cherry picking</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-binaries--promotion-to-stable">Building Binaries / Promotion to Stable</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#hardware--software-support-in-binary-build-matrix">Hardware / Software Support in Binary Build Matrix</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#python">Python</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#tldr">TL;DR</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#accelerator-software">Accelerator Software</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-support-cases">Special support cases</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-topics">Special Topics</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#updating-submodules-for-a-release">Updating submodules for a release</a></li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
<h2>General Overview</h2>
<p>Releasing a new version of PyTorch generally entails 3 major steps:</p>
<ol start="0">
<li>Cutting a release branch preparations</li>
<li>Cutting a release branch and making release branch specific changes</li>
<li>Drafting RCs (Release Candidates), and merging cherry picks</li>
<li>Promoting RCs to stable and performing release day tasks</li>
</ol>
<h2>Cutting a release branch preparations</h2>
<p>Following Requirements needs to be met prior to final RC Cut:</p>
<ul>
<li>Resolve all outstanding issues in the milestones (for example <a href="https://github.com/pytorch/pytorch/milestone/28">1.11.0</a>) before the first RC cut is completed. After the RC cut is completed, the following script should be executed from the builder repo in order to validate the presence of the fixes in the release branch:</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pytorch/pytorch/commit/49444c3e546bf240bed24a101e747422d1f8a0ee"><code>49444c3</code></a> [BE] Do not package caffe2 in wheel (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87986">#87986</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90433">#90433</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/56de8a39c595777f35e342a7cde9d602d57cca32"><code>56de8a3</code></a> Add manual cuda deps search logic (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90411">#90411</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90426">#90426</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/a4d16e0fb670246f18d8c07396808cd5e3766f0b"><code>a4d16e0</code></a> Fix ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90104">#90104</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/80abad3e7460415efe480ab21c1d5c90fc345a27"><code>80abad3</code></a> Handle Tensor.<strong>deepcopy</strong> via clone(), on IPU (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89129">#89129</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89999">#89999</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/73a852acd7946dff8beb818ec723ffa453e7b242"><code>73a852a</code></a> [Release only change] Fix rocm5.1.1 docker image (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90321">#90321</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/029ec163f2b3a7c46ccb3e8d8b377c9319db463a"><code>029ec16</code></a> Add platform markers for linux only extra_install_requires (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88826">#88826</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89924">#89924</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/197c5c0b849cfdb4f6844f90c49bb8adba85e1bb"><code>197c5c0</code></a> Fix cuda/cpu check on NoneType (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90068">#90068</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/aadbeb7416e20a9be694f1da415626135c5c1097"><code>aadbeb7</code></a> Make TorchElastic timer importable on Windows (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88522">#88522</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90045">#90045</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/aa9443306a3ba6e8412e24dd99d17eab3f90e818"><code>aa94433</code></a> Mark IPU device as not supports_as_strided (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89130">#89130</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89998">#89998</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/59b4f3be3bd073b1243e20284fbd09ff43bc66f5"><code>59b4f3b</code></a> Use the Python frame safely in _pythonCallstack (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89997">#89997</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pytorch/pytorch/compare/v1.6.0...v1.13.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
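For reference, a minimal sketch (not part of Dependabot itself) of posting one of these commands programmatically; the `requests` dependency and the `GITHUB_TOKEN` environment variable are assumptions:

```python
# Post "@dependabot rebase" as a PR comment via the GitHub REST API.
# PR comments are created through the issues comments endpoint; the URL
# below follows the comments_url pattern stored in this record.
import os

import requests

resp = requests.post(
    "https://api.github.com/repos/huggingface/transformers/issues/21172/comments",
    headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
    json={"body": "@dependabot rebase"},
)
resp.raise_for_status()
print(resp.json()["html_url"])  # link to the newly created comment
```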
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21172/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21172",
"html_url": "https://github.com/huggingface/transformers/pull/21172",
"diff_url": "https://github.com/huggingface/transformers/pull/21172.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21172.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21171
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21171/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21171/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21171/events
|
https://github.com/huggingface/transformers/pull/21171
| 1,538,288,065
|
PR_kwDOCUB6oc5HpBtb
| 21,171
|
Bump torch from 1.11.0 to 1.13.1 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it."
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
Bumps [torch](https://github.com/pytorch/pytorch) from 1.11.0 to 1.13.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/releases">torch's releases</a>.</em></p>
<blockquote>
<h2>PyTorch 1.13.1 Release, small bug fix release</h2>
<p>This release is meant to fix the following issues (regressions / silent correctness):</p>
<ul>
<li>RuntimeError by torch.nn.modules.activation.MultiheadAttention with bias=False and batch_first=True <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88669">#88669</a> (a repro sketch follows this list)</li>
<li>Installation via pip on Amazon Linux 2, regression <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88869">#88869</a></li>
<li>Installation using poetry on Mac M1, failure <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88049">#88049</a></li>
<li>Missing masked tensor documentation <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89734">#89734</a></li>
<li>torch.jit.annotations.parse_type_line is not safe (command injection) <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88868">#88868</a></li>
<li>Use the Python frame safely in _pythonCallstack <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88993">#88993</a></li>
<li>Double-backward with full_backward_hook causes RuntimeError <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88312">#88312</a></li>
<li>Fix logical error in get_default_qat_qconfig <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88876">#88876</a></li>
<li>Fix cuda/cpu check on NoneType and unit test <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a> and <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88970">#88970</a></li>
<li>Onnx ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a></li>
<li>Onnx operator_export_type on the new registry <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87735">#87735</a></li>
<li>torchrun AttributeError caused by file_based_local_timer on Windows <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/85427">#85427</a></li>
</ul>
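<p>A minimal repro sketch for the first item, assuming torch==1.13.1 (with this fix, the combination below no longer raises):</p>

```python
# bias=False together with batch_first=True used to raise a RuntimeError on
# the eval-mode fastpath (#88669); fixed in 1.13.1.
import torch

mha = torch.nn.MultiheadAttention(embed_dim=8, num_heads=2, bias=False, batch_first=True).eval()
x = torch.randn(2, 5, 8)  # (batch, sequence, embedding)
with torch.inference_mode():
    out, _ = mha(x, x, x, need_weights=False)  # fastpath-eligible call
print(out.shape)  # torch.Size([2, 5, 8])
```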
<p>The <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89855">release tracker</a> should contain all relevant pull requests related to this release as well as links to related issues</p>
<h2>PyTorch 1.13: beta versions of functorch and improved support for Apple’s new M1 chips are now available</h2>
<h1>PyTorch 1.13 Release Notes</h1>
<ul>
<li>Highlights</li>
<li>Backwards Incompatible Changes</li>
<li>New Features</li>
<li>Improvements</li>
<li>Performance</li>
<li>Documentation</li>
<li>Developers</li>
</ul>
<h1>Highlights</h1>
<p>We are excited to announce the release of PyTorch 1.13! This includes stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.</p>
<p>Summary:</p>
<ul>
<li>
<p>The BetterTransformer feature set supports fastpath execution for common Transformer models during inference out of the box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models, and Nested Tensors are now enabled by default.</p>
</li>
<li>
<p>Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.</p>
</li>
<li>
<p>Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to <code>import functorch</code> and use functorch without needing to install another package. (A usage sketch follows this list.)</p>
</li>
<li>
<p>PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.</p>
</li>
</ul>
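<p>A minimal usage sketch for the in-tree functorch mentioned above, assuming torch>=1.13:</p>

```python
# functorch ships with PyTorch from 1.13 on; no separate install is needed.
import torch
import functorch

def dot(a, b):
    # Per-example dot product; vmap lifts it over the leading batch dim.
    return (a * b).sum()

batched_dot = functorch.vmap(dot)
x, y = torch.randn(8, 4), torch.randn(8, 4)
print(batched_dot(x, y).shape)  # torch.Size([8])
```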
<table>
<thead>
<tr>
<th>Stable</th>
<th>Beta</th>
<th>Prototype</th>
</tr>
</thead>
<tbody>
<tr>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Better Transformer<!-- raw HTML omitted --><!-- raw HTML omitted -->CUDA 10.2 and 11.3 CI/CD Deprecation <!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs<!-- raw HTML omitted --><!-- raw HTML omitted -->Extend NNC to support channels last and bf16<!-- raw HTML omitted --><!-- raw HTML omitted -->Functorch now in PyTorch Core Library<!-- raw HTML omitted --><!-- raw HTML omitted -->Beta Support for M1 devices<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Arm® Compute Library backend support for AWS Graviton<!-- raw HTML omitted --><!-- raw HTML omitted --> CUDA Sanitizer<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
</tr>
</tbody>
</table>
<p>You can check the blogpost that shows the new features <a href="https://pytorch.org/blog/PyTorch-1.13-release/">here</a>.</p>
<h1>Backwards Incompatible changes</h1>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/blob/master/RELEASE.md">torch's changelog</a>.</em></p>
<blockquote>
<h1>Releasing PyTorch</h1>
<!-- raw HTML omitted -->
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#general-overview">General Overview</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-a-release-branch-preparations">Cutting a release branch preparations</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-release-branches">Cutting release branches</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchpytorch"><code>pytorch/pytorch</code></a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchbuilder--pytorch-domain-libraries"><code>pytorch/builder</code> / PyTorch domain libraries</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-pytorch">Making release branch specific changes for PyTorch</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-domain-libraries">Making release branch specific changes for domain libraries</a></li>
</ul>
</li>
<li><a href="#drafting-rcs-release-candidates-for-pytorch-and-domain-libraries">Drafting RCs (https://github.com/pytorch/pytorch/blob/master/Release Candidates) for PyTorch and domain libraries</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-storage">Release Candidate Storage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-health-validation">Release Candidate health validation</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cherry-picking-fixes">Cherry Picking Fixes</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#promoting-rcs-to-stable">Promoting RCs to Stable</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#additional-steps-to-prepare-for-release-day">Additional Steps to prepare for release day</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#modify-release-matrix">Modify release matrix</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#open-google-colab-issue">Open Google Colab issue</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-releases">Patch Releases</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-criteria">Patch Release Criteria</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-process">Patch Release Process</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#triage">Triage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#issue-tracker-for-patch-releases">Issue Tracker for Patch releases</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-a-release-schedule--cherry-picking">Building a release schedule / cherry picking</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-binaries--promotion-to-stable">Building Binaries / Promotion to Stable</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#hardware--software-support-in-binary-build-matrix">Hardware / Software Support in Binary Build Matrix</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#python">Python</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#tldr">TL;DR</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#accelerator-software">Accelerator Software</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-support-cases">Special support cases</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-topics">Special Topics</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#updating-submodules-for-a-release">Updating submodules for a release</a></li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
<h2>General Overview</h2>
<p>Releasing a new version of PyTorch generally entails 3 major steps:</p>
<ol start="0">
<li>Cutting a release branch preparations</li>
<li>Cutting a release branch and making release branch specific changes</li>
<li>Drafting RCs (Release Candidates), and merging cherry picks</li>
<li>Promoting RCs to stable and performing release day tasks</li>
</ol>
<h2>Cutting a release branch preparations</h2>
<p>The following requirements need to be met prior to the final RC cut:</p>
<ul>
<li>Resolve all outstanding issues in the milestones (for example <a href="https://github.com/pytorch/pytorch/milestone/28">1.11.0</a>) before the first RC cut is completed. After the RC cut is completed, the following script should be executed from the builder repo in order to validate the presence of the fixes in the release branch:</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pytorch/pytorch/commit/49444c3e546bf240bed24a101e747422d1f8a0ee"><code>49444c3</code></a> [BE] Do not package caffe2 in wheel (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87986">#87986</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90433">#90433</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/56de8a39c595777f35e342a7cde9d602d57cca32"><code>56de8a3</code></a> Add manual cuda deps search logic (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90411">#90411</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90426">#90426</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/a4d16e0fb670246f18d8c07396808cd5e3766f0b"><code>a4d16e0</code></a> Fix ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90104">#90104</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/80abad3e7460415efe480ab21c1d5c90fc345a27"><code>80abad3</code></a> Handle Tensor.<strong>deepcopy</strong> via clone(), on IPU (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89129">#89129</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89999">#89999</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/73a852acd7946dff8beb818ec723ffa453e7b242"><code>73a852a</code></a> [Release only change] Fix rocm5.1.1 docker image (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90321">#90321</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/029ec163f2b3a7c46ccb3e8d8b377c9319db463a"><code>029ec16</code></a> Add platform markers for linux only extra_install_requires (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88826">#88826</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89924">#89924</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/197c5c0b849cfdb4f6844f90c49bb8adba85e1bb"><code>197c5c0</code></a> Fix cuda/cpu check on NoneType (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90068">#90068</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/aadbeb7416e20a9be694f1da415626135c5c1097"><code>aadbeb7</code></a> Make TorchElastic timer importable on Windows (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88522">#88522</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90045">#90045</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/aa9443306a3ba6e8412e24dd99d17eab3f90e818"><code>aa94433</code></a> Mark IPU device as not supports_as_strided (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89130">#89130</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89998">#89998</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/59b4f3be3bd073b1243e20284fbd09ff43bc66f5"><code>59b4f3b</code></a> Use the Python frame safely in _pythonCallstack (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89997">#89997</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pytorch/pytorch/compare/v1.11.0...v1.13.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21171/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21171",
"html_url": "https://github.com/huggingface/transformers/pull/21171",
"diff_url": "https://github.com/huggingface/transformers/pull/21171.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21171.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21170
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21170/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21170/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21170/events
|
https://github.com/huggingface/transformers/pull/21170
| 1,538,288,046
|
PR_kwDOCUB6oc5HpBtJ
| 21,170
|
Bump torch from 1.11.0 to 1.13.1 in /examples/research_projects/codeparrot
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21170). All of your documentation changes will be reflected on that endpoint."
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
Bumps [torch](https://github.com/pytorch/pytorch) from 1.11.0 to 1.13.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/releases">torch's releases</a>.</em></p>
<blockquote>
<h2>PyTorch 1.13.1 Release, small bug fix release</h2>
<p>This release is meant to fix the following issues (regressions / silent correctness):</p>
<ul>
<li>RuntimeError by torch.nn.modules.activation.MultiheadAttention with bias=False and batch_first=True <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88669">#88669</a></li>
<li>Installation via pip on Amazon Linux 2, regression <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88869">#88869</a></li>
<li>Installation using poetry on Mac M1, failure <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88049">#88049</a></li>
<li>Missing masked tensor documentation <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89734">#89734</a></li>
<li>torch.jit.annotations.parse_type_line is not safe (command injection) <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88868">#88868</a></li>
<li>Use the Python frame safely in _pythonCallstack <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88993">#88993</a></li>
<li>Double-backward with full_backward_hook causes RuntimeError <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88312">#88312</a></li>
<li>Fix logical error in get_default_qat_qconfig <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88876">#88876</a></li>
<li>Fix cuda/cpu check on NoneType and unit test <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a> and <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88970">#88970</a></li>
<li>Onnx ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a></li>
<li>Onnx operator_export_type on the new registry <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87735">#87735</a></li>
<li>torchrun AttributeError caused by file_based_local_timer on Windows <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/85427">#85427</a></li>
</ul>
<p>The <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89855">release tracker</a> should contain all relevant pull requests related to this release as well as links to related issues</p>
<h2>PyTorch 1.13: beta versions of functorch and improved support for Apple’s new M1 chips are now available</h2>
<h1>PyTorch 1.13 Release Notes</h1>
<ul>
<li>Highlights</li>
<li>Backwards Incompatible Changes</li>
<li>New Features</li>
<li>Improvements</li>
<li>Performance</li>
<li>Documentation</li>
<li>Developers</li>
</ul>
<h1>Highlights</h1>
<p>We are excited to announce the release of PyTorch 1.13! This includes stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.</p>
<p>Summary:</p>
<ul>
<li>
<p>The BetterTransformer feature set supports fastpath execution for common Transformer models during inference out of the box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models, and Nested Tensors are now enabled by default. (A fastpath sketch follows this list.)</p>
</li>
<li>
<p>Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.</p>
</li>
<li>
<p>Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to <code>import functorch</code> and use functorch without needing to install another package.</p>
</li>
<li>
<p>PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.</p>
</li>
</ul>
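<p>A minimal fastpath sketch for the BetterTransformer item above, assuming torch>=1.13 (the fused kernels are taken automatically for eligible eval-mode models):</p>

```python
import torch

# A stock encoder; no model changes are needed to benefit from the fastpath.
layer = torch.nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = torch.nn.TransformerEncoder(layer, num_layers=2).eval()
src = torch.randn(2, 16, 64)  # (batch, sequence, embedding)
with torch.inference_mode():
    out = encoder(src)
print(out.shape)  # torch.Size([2, 16, 64])
```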
<table>
<thead>
<tr>
<th>Stable</th>
<th>Beta</th>
<th>Prototype</th>
</tr>
</thead>
<tbody>
<tr>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Better Transformer<!-- raw HTML omitted --><!-- raw HTML omitted -->CUDA 10.2 and 11.3 CI/CD Deprecation <!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs<!-- raw HTML omitted --><!-- raw HTML omitted -->Extend NNC to support channels last and bf16<!-- raw HTML omitted --><!-- raw HTML omitted -->Functorch now in PyTorch Core Library<!-- raw HTML omitted --><!-- raw HTML omitted -->Beta Support for M1 devices<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Arm® Compute Library backend support for AWS Graviton<!-- raw HTML omitted --><!-- raw HTML omitted --> CUDA Sanitizer<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
</tr>
</tbody>
</table>
<p>You can check the blogpost that shows the new features <a href="https://pytorch.org/blog/PyTorch-1.13-release/">here</a>.</p>
<h1>Backwards Incompatible changes</h1>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/blob/master/RELEASE.md">torch's changelog</a>.</em></p>
<blockquote>
<h1>Releasing PyTorch</h1>
<!-- raw HTML omitted -->
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#general-overview">General Overview</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-a-release-branch-preparations">Cutting a release branch preparations</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-release-branches">Cutting release branches</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchpytorch"><code>pytorch/pytorch</code></a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchbuilder--pytorch-domain-libraries"><code>pytorch/builder</code> / PyTorch domain libraries</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-pytorch">Making release branch specific changes for PyTorch</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-domain-libraries">Making release branch specific changes for domain libraries</a></li>
</ul>
</li>
<li><a href="#drafting-rcs-release-candidates-for-pytorch-and-domain-libraries">Drafting RCs (https://github.com/pytorch/pytorch/blob/master/Release Candidates) for PyTorch and domain libraries</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-storage">Release Candidate Storage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-health-validation">Release Candidate health validation</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cherry-picking-fixes">Cherry Picking Fixes</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#promoting-rcs-to-stable">Promoting RCs to Stable</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#additional-steps-to-prepare-for-release-day">Additional Steps to prepare for release day</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#modify-release-matrix">Modify release matrix</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#open-google-colab-issue">Open Google Colab issue</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-releases">Patch Releases</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-criteria">Patch Release Criteria</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-process">Patch Release Process</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#triage">Triage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#issue-tracker-for-patch-releases">Issue Tracker for Patch releases</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-a-release-schedule--cherry-picking">Building a release schedule / cherry picking</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-binaries--promotion-to-stable">Building Binaries / Promotion to Stable</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#hardware--software-support-in-binary-build-matrix">Hardware / Software Support in Binary Build Matrix</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#python">Python</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#tldr">TL;DR</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#accelerator-software">Accelerator Software</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-support-cases">Special support cases</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-topics">Special Topics</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#updating-submodules-for-a-release">Updating submodules for a release</a></li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
<h2>General Overview</h2>
<p>Releasing a new version of PyTorch generally entails 3 major steps:</p>
<ol start="0">
<li>Cutting a release branch preparations</li>
<li>Cutting a release branch and making release branch specific changes</li>
<li>Drafting RCs (Release Candidates), and merging cherry picks</li>
<li>Promoting RCs to stable and performing release day tasks</li>
</ol>
<h2>Cutting a release branch preparations</h2>
<p>The following requirements need to be met prior to the final RC cut:</p>
<ul>
<li>Resolve all outstanding issues in the milestones (for example <a href="https://github.com/pytorch/pytorch/milestone/28">1.11.0</a>) before the first RC cut is completed. After the RC cut is completed, the following script should be executed from the builder repo in order to validate the presence of the fixes in the release branch:</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pytorch/pytorch/commit/49444c3e546bf240bed24a101e747422d1f8a0ee"><code>49444c3</code></a> [BE] Do not package caffe2 in wheel (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87986">#87986</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90433">#90433</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/56de8a39c595777f35e342a7cde9d602d57cca32"><code>56de8a3</code></a> Add manual cuda deps search logic (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90411">#90411</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90426">#90426</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/a4d16e0fb670246f18d8c07396808cd5e3766f0b"><code>a4d16e0</code></a> Fix ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90104">#90104</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/80abad3e7460415efe480ab21c1d5c90fc345a27"><code>80abad3</code></a> Handle Tensor.<strong>deepcopy</strong> via clone(), on IPU (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89129">#89129</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89999">#89999</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/73a852acd7946dff8beb818ec723ffa453e7b242"><code>73a852a</code></a> [Release only change] Fix rocm5.1.1 docker image (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90321">#90321</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/029ec163f2b3a7c46ccb3e8d8b377c9319db463a"><code>029ec16</code></a> Add platform markers for linux only extra_install_requires (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88826">#88826</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89924">#89924</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/197c5c0b849cfdb4f6844f90c49bb8adba85e1bb"><code>197c5c0</code></a> Fix cuda/cpu check on NoneType (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90068">#90068</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/aadbeb7416e20a9be694f1da415626135c5c1097"><code>aadbeb7</code></a> Make TorchElastic timer importable on Windows (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88522">#88522</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90045">#90045</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/aa9443306a3ba6e8412e24dd99d17eab3f90e818"><code>aa94433</code></a> Mark IPU device as not supports_as_strided (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89130">#89130</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89998">#89998</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/59b4f3be3bd073b1243e20284fbd09ff43bc66f5"><code>59b4f3b</code></a> Use the Python frame safely in _pythonCallstack (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89997">#89997</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pytorch/pytorch/compare/v1.11.0...v1.13.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21170/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21170",
"html_url": "https://github.com/huggingface/transformers/pull/21170",
"diff_url": "https://github.com/huggingface/transformers/pull/21170.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21170.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21169
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21169/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21169/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21169/events
|
https://github.com/huggingface/transformers/pull/21169
| 1,538,288,040
|
PR_kwDOCUB6oc5HpBtD
| 21,169
|
Bump future from 0.18.2 to 0.18.3 in /examples/research_projects/lxmert
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21169). All of your documentation changes will be reflected on that endpoint."
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
Bumps [future](https://github.com/PythonCharmers/python-future) from 0.18.2 to 0.18.3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/PythonCharmers/python-future/releases">future's releases</a>.</em></p>
<blockquote>
<h2>v0.18.3</h2>
<p>This is a minor bug-fix release containing a number of fixes:</p>
<ul>
<li>Backport fix for bpo-38804 (c91d70b)</li>
<li>Fix bug in fix_print.py fixer (dffc579)</li>
<li>Fix bug in fix_raise.py fixer (3401099)</li>
<li>Fix newint bool in py3 (fe645ba)</li>
<li>Fix bug in super() with metaclasses (6e27aac)</li>
<li>docs: fix simple typo, reqest -> request (974eb1f)</li>
<li>Correct __eq__ (c780bf5)</li>
<li>Pass if lint fails (2abe00d)</li>
<li>Update docker image and parcel out to constant variable. Add comment to update version constant (45cf382)</li>
<li>fix order (f96a219)</li>
<li>Add flake8 to image (046ff18)</li>
<li>Make lint.sh executable (58cc984)</li>
<li>Add docker push to optimize CI (01e8440)</li>
<li>Build System (42b3025)</li>
<li>Add docs build status badge to README.md (3f40bd7)</li>
<li>Use same docs requirements in tox (18ecc5a)</li>
<li>Add docs/requirements.txt (5f9893f)</li>
<li>Add PY37_PLUS, PY38_PLUS, and PY39_PLUS (bee0247)</li>
<li>fix 2.6 test, better comment (ddedcb9)</li>
<li>fix 2.6 test (3f1ff7e)</li>
<li>remove nan test (4dbded1)</li>
<li>include list test values (e3f1a12)</li>
<li>fix other python2 test issues (c051026)</li>
<li>fix missing subTest (f006cad)</li>
<li>import from old imp library on older python versions (fc84fa8)</li>
<li>replace fstrings with format for python 3.4,3.5 (4a687ea)</li>
<li>minor style/spelling fixes (8302d8c)</li>
<li>improve cmp function, add unittest (0d95a40)</li>
<li>Pin typing==3.7.4.1 for Python 3.3 compatibility (1a48f1b)</li>
<li>Fix various py26 unit test failures (9ca5a14)</li>
<li>Add initial contributing guide with docs build instruction (e55f915)</li>
<li>Add docs building to tox.ini (3ee9e7f)</li>
<li>Support NumPy's specialized int types in builtins.round (b4b54f0)</li>
<li>Added r""" to the docstring to avoid warnings in python3 (5f94572)</li>
<li>Add __subclasscheck__ for past.types.basestring (c9bc0ff) (see the sketch after this list)</li>
<li>Correct example in README (681e78c)</li>
<li>Add simple documentation (6c6e3ae)</li>
<li>Add pre-commit hooks (a9c6a37)</li>
<li>Handling of __next__ and next by future.utils.get_next was reversed (52b0ff9)</li>
<li>Add a test for our fix (461d77e)</li>
<li>Compare headers to correct definition of str (3eaa8fd)</li>
<li><a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/322">#322</a> Add support for negative ndigits in round; additionally, fixing a bug so that it handles passing in Decimal properly (a4911b9)</li>
<li>Add tkFileDialog to future.movers.tkinter (f6a6549)</li>
<li>Sort before comparing dicts in TestChainMap (6126997)</li>
<li>Fix typo (4dfa099)</li>
<li>Fix formatting in "What's new" (1663dfa)</li>
<li>Fix typo (4236061)</li>
</ul>
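<p>A minimal sketch of the <code>__subclasscheck__</code> fix noted above, assuming future==0.18.3:</p>

```python
# past.types.basestring is the Python 2 compatibility type provided by the
# `future` package; 0.18.3 adds __subclasscheck__ (c9bc0ff) so str subclasses
# are recognized as well.
from past.types import basestring

print(issubclass(str, basestring))      # True
print(isinstance("hello", basestring))  # True
```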
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/PythonCharmers/python-future/commit/af1db970b0879b59e7aeb798c27a623144561cff"><code>af1db97</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/613">#613</a> from PythonCharmers/lwan/0.18.3-release</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/079ee9b75441d36447cec9981fa1b0032862f64d"><code>079ee9b</code></a> Prepare for 0.18.3 release</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/02f7a8143d5b68f50a1cca44d8f5a58c1925a515"><code>02f7a81</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/610">#610</a> from wshanks/wshanks-patch-1</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/c91d70b34ef0402aef3e9d04364ba98509dca76f"><code>c91d70b</code></a> Backport fix for bpo-38804</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/80523f383fbba1c6de0551e19d0277e73e69573c"><code>80523f3</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/569">#569</a> from jmadler/master</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/5e5af71549c7a7fa0e28f881046e081e231e455d"><code>5e5af71</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/582">#582</a> from r3m0t/patch-6</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/17e4bbd7c676a9a8efd20601e51675c95f74b330"><code>17e4bbd</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/596">#596</a> from abjonnes/fix-print-trailing-comma</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/1b427ba70191927706282840835e31ae0733ee7e"><code>1b427ba</code></a> Merge branch 'xZise-official-count' into master</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/c8eb497336c76d300c6753b47c7f5de505660d7a"><code>c8eb497</code></a> Merge branch 'official-count' of <a href="https://github.com/xZise/python-future">https://github.com/xZise/python-future</a> into ...</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/dffc579dbb7c882fc01fa0c0dfa6b59acef7827d"><code>dffc579</code></a> Fix bug in fix_print.py fixer</li>
<li>Additional commits viewable in <a href="https://github.com/PythonCharmers/python-future/compare/v0.18.2...v0.18.3">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21169/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21169",
"html_url": "https://github.com/huggingface/transformers/pull/21169",
"diff_url": "https://github.com/huggingface/transformers/pull/21169.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21169.patch",
"merged_at": 1674058604000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21168
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21168/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21168/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21168/events
|
https://github.com/huggingface/transformers/pull/21168
| 1,538,288,038
|
PR_kwDOCUB6oc5HpBtB
| 21,168
|
Bump torch from 1.9.0+cpu to 1.13.1 in /examples/flax/vision
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21168). All of your documentation changes will be reflected on that endpoint."
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
Bumps [torch](https://github.com/pytorch/pytorch) from 1.9.0+cpu to 1.13.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/releases">torch's releases</a>.</em></p>
<blockquote>
<h2>PyTorch 1.13.1 Release, small bug fix release</h2>
<p>This release is meant to fix the following issues (regressions / silent correctness):</p>
<ul>
<li>RuntimeError by torch.nn.modules.activation.MultiheadAttention with bias=False and batch_first=True <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88669">#88669</a></li>
<li>Installation via pip on Amazon Linux 2, regression <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88869">#88869</a></li>
<li>Installation using poetry on Mac M1, failure <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88049">#88049</a></li>
<li>Missing masked tensor documentation <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89734">#89734</a></li>
<li>torch.jit.annotations.parse_type_line is not safe (command injection) <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88868">#88868</a></li>
<li>Use the Python frame safely in _pythonCallstack <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88993">#88993</a></li>
<li>Double-backward with full_backward_hook causes RuntimeError <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88312">#88312</a></li>
<li>Fix logical error in get_default_qat_qconfig <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88876">#88876</a></li>
<li>Fix cuda/cpu check on NoneType and unit test <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a> and <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88970">#88970</a></li>
<li>Onnx ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a></li>
<li>Onnx operator_export_type on the new registry <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87735">#87735</a></li>
<li>torchrun AttributeError caused by file_based_local_timer on Windows <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/85427">#85427</a></li>
</ul>
<p>The <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89855">release tracker</a> should contain all relevant pull requests related to this release as well as links to related issues</p>
<h2>PyTorch 1.13: beta versions of functorch and improved support for Apple’s new M1 chips are now available</h2>
<h1>PyTorch 1.13 Release Notes</h1>
<ul>
<li>Highlights</li>
<li>Backwards Incompatible Changes</li>
<li>New Features</li>
<li>Improvements</li>
<li>Performance</li>
<li>Documentation</li>
<li>Developers</li>
</ul>
<h1>Highlights</h1>
<p>We are excited to announce the release of PyTorch 1.13! This includes stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.</p>
<p>Summary:</p>
<ul>
<li>
<p>The BetterTransformer feature set supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models and Nested Tensors is now enabled by default.</p>
</li>
<li>
<p>Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.</p>
</li>
<li>
<p>Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to <code>import functorch</code> and use functorch without needing to install another package.</p>
</li>
<li>
<p>PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.</p>
</li>
</ul>
<table>
<thead>
<tr>
<th>Stable</th>
<th>Beta</th>
<th>Prototype</th>
</tr>
</thead>
<tbody>
<tr>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Better Transformer<!-- raw HTML omitted --><!-- raw HTML omitted -->CUDA 10.2 and 11.3 CI/CD Deprecation <!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs<!-- raw HTML omitted --><!-- raw HTML omitted -->Extend NNC to support channels last and bf16<!-- raw HTML omitted --><!-- raw HTML omitted -->Functorch now in PyTorch Core Library<!-- raw HTML omitted --><!-- raw HTML omitted -->Beta Support for M1 devices<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Arm® Compute Library backend support for AWS Graviton<!-- raw HTML omitted --><!-- raw HTML omitted --> CUDA Sanitizer<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
</tr>
</tbody>
</table>
<p>You can check the blogpost that shows the new features <a href="https://pytorch.org/blog/PyTorch-1.13-release/">here</a>.</p>
<h1>Backwards Incompatible changes</h1>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/blob/master/RELEASE.md">torch's changelog</a>.</em></p>
<blockquote>
<h1>Releasing PyTorch</h1>
<!-- raw HTML omitted -->
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#general-overview">General Overview</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-a-release-branch-preparations">Cutting a release branch preparations</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-release-branches">Cutting release branches</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchpytorch"><code>pytorch/pytorch</code></a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchbuilder--pytorch-domain-libraries"><code>pytorch/builder</code> / PyTorch domain libraries</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-pytorch">Making release branch specific changes for PyTorch</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-domain-libraries">Making release branch specific changes for domain libraries</a></li>
</ul>
</li>
<li><a href="#drafting-rcs-release-candidates-for-pytorch-and-domain-libraries">Drafting RCs (https://github.com/pytorch/pytorch/blob/master/Release Candidates) for PyTorch and domain libraries</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-storage">Release Candidate Storage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-health-validation">Release Candidate health validation</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cherry-picking-fixes">Cherry Picking Fixes</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#promoting-rcs-to-stable">Promoting RCs to Stable</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#additional-steps-to-prepare-for-release-day">Additional Steps to prepare for release day</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#modify-release-matrix">Modify release matrix</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#open-google-colab-issue">Open Google Colab issue</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-releases">Patch Releases</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-criteria">Patch Release Criteria</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-process">Patch Release Process</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#triage">Triage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#issue-tracker-for-patch-releases">Issue Tracker for Patch releases</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-a-release-schedule--cherry-picking">Building a release schedule / cherry picking</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-binaries--promotion-to-stable">Building Binaries / Promotion to Stable</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#hardware--software-support-in-binary-build-matrix">Hardware / Software Support in Binary Build Matrix</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#python">Python</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#tldr">TL;DR</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#accelerator-software">Accelerator Software</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-support-cases">Special support cases</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-topics">Special Topics</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#updating-submodules-for-a-release">Updating submodules for a release</a></li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
<h2>General Overview</h2>
<p>Releasing a new version of PyTorch generally entails 3 major steps:</p>
<ol start="0">
<li>Cutting a release branch preparations</li>
<li>Cutting a release branch and making release branch specific changes</li>
<li>Drafting RCs (Release Candidates), and merging cherry picks</li>
<li>Promoting RCs to stable and performing release day tasks</li>
</ol>
<h2>Cutting a release branch preparations</h2>
<p>Following Requirements needs to be met prior to final RC Cut:</p>
<ul>
<li>Resolve all outstanding issues in the milestones(for example <a href="https://github.com/pytorch/pytorch/milestone/28">1.11.0</a>)before first RC cut is completed. After RC cut is completed following script should be executed from builder repo in order to validate the presence of the fixes in the release branch :</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a href="https://github.com/pytorch/pytorch/commits/v1.13.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21168/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21168",
"html_url": "https://github.com/huggingface/transformers/pull/21168",
"diff_url": "https://github.com/huggingface/transformers/pull/21168.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21168.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21167
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21167/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21167/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21167/events
|
https://github.com/huggingface/transformers/pull/21167
| 1,538,288,011
|
PR_kwDOCUB6oc5HpBsr
| 21,167
|
Bump torch from 1.9.0+cpu to 1.13.1 in /examples/research_projects/jax-projects/hybrid_clip
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21167). All of your documentation changes will be reflected on that endpoint."
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
Bumps [torch](https://github.com/pytorch/pytorch) from 1.9.0+cpu to 1.13.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/releases">torch's releases</a>.</em></p>
<blockquote>
<h2>PyTorch 1.13.1 Release, small bug fix release</h2>
<p>This release is meant to fix the following issues (regressions / silent correctness):</p>
<ul>
<li>RuntimeError by torch.nn.modules.activation.MultiheadAttention with bias=False and batch_first=True <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88669">#88669</a></li>
<li>Installation via pip on Amazon Linux 2, regression <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88869">#88869</a></li>
<li>Installation using poetry on Mac M1, failure <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88049">#88049</a></li>
<li>Missing masked tensor documentation <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89734">#89734</a></li>
<li>torch.jit.annotations.parse_type_line is not safe (command injection) <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88868">#88868</a></li>
<li>Use the Python frame safely in _pythonCallstack <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88993">#88993</a></li>
<li>Double-backward with full_backward_hook causes RuntimeError <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88312">#88312</a></li>
<li>Fix logical error in get_default_qat_qconfig <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88876">#88876</a></li>
<li>Fix cuda/cpu check on NoneType and unit test <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a> and <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88970">#88970</a></li>
<li>Onnx ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a></li>
<li>Onnx operator_export_type on the new registry <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87735">#87735</a></li>
<li>torchrun AttributeError caused by file_based_local_timer on Windows <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/85427">#85427</a></li>
</ul>
<p>The <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89855">release tracker</a> should contain all relevant pull requests related to this release as well as links to related issues</p>
<h2>PyTorch 1.13: beta versions of functorch and improved support for Apple’s new M1 chips are now available</h2>
<h1>Pytorch 1.13 Release Notes</h1>
<ul>
<li>Highlights</li>
<li>Backwards Incompatible Changes</li>
<li>New Features</li>
<li>Improvements</li>
<li>Performance</li>
<li>Documentation</li>
<li>Developers</li>
</ul>
<h1>Highlights</h1>
<p>We are excited to announce the release of PyTorch 1.13! This includes stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.</p>
<p>Summary:</p>
<ul>
<li>
<p>The BetterTransformer feature set supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models and Nested Tensors is now enabled by default.</p>
</li>
<li>
<p>Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.</p>
</li>
<li>
<p>Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to <code>import functorch</code> and use functorch without needing to install another package.</p>
</li>
<li>
<p>PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.</p>
</li>
</ul>
<table>
<thead>
<tr>
<th>Stable</th>
<th>Beta</th>
<th>Prototype</th>
</tr>
</thead>
<tbody>
<tr>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Better Transformer<!-- raw HTML omitted --><!-- raw HTML omitted -->CUDA 10.2 and 11.3 CI/CD Deprecation <!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs<!-- raw HTML omitted --><!-- raw HTML omitted -->Extend NNC to support channels last and bf16<!-- raw HTML omitted --><!-- raw HTML omitted -->Functorch now in PyTorch Core Library<!-- raw HTML omitted --><!-- raw HTML omitted -->Beta Support for M1 devices<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Arm® Compute Library backend support for AWS Graviton<!-- raw HTML omitted --><!-- raw HTML omitted --> CUDA Sanitizer<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
</tr>
</tbody>
</table>
<p>You can check the blogpost that shows the new features <a href="https://pytorch.org/blog/PyTorch-1.13-release/">here</a>.</p>
<h1>Backwards Incompatible changes</h1>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/blob/master/RELEASE.md">torch's changelog</a>.</em></p>
<blockquote>
<h1>Releasing PyTorch</h1>
<!-- raw HTML omitted -->
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#general-overview">General Overview</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-a-release-branch-preparations">Cutting a release branch preparations</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-release-branches">Cutting release branches</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchpytorch"><code>pytorch/pytorch</code></a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchbuilder--pytorch-domain-libraries"><code>pytorch/builder</code> / PyTorch domain libraries</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-pytorch">Making release branch specific changes for PyTorch</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-domain-libraries">Making release branch specific changes for domain libraries</a></li>
</ul>
</li>
<li><a href="#drafting-rcs-release-candidates-for-pytorch-and-domain-libraries">Drafting RCs (https://github.com/pytorch/pytorch/blob/master/Release Candidates) for PyTorch and domain libraries</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-storage">Release Candidate Storage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-health-validation">Release Candidate health validation</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cherry-picking-fixes">Cherry Picking Fixes</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#promoting-rcs-to-stable">Promoting RCs to Stable</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#additional-steps-to-prepare-for-release-day">Additional Steps to prepare for release day</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#modify-release-matrix">Modify release matrix</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#open-google-colab-issue">Open Google Colab issue</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-releases">Patch Releases</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-criteria">Patch Release Criteria</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-process">Patch Release Process</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#triage">Triage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#issue-tracker-for-patch-releases">Issue Tracker for Patch releases</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-a-release-schedule--cherry-picking">Building a release schedule / cherry picking</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-binaries--promotion-to-stable">Building Binaries / Promotion to Stable</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#hardware--software-support-in-binary-build-matrix">Hardware / Software Support in Binary Build Matrix</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#python">Python</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#tldr">TL;DR</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#accelerator-software">Accelerator Software</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-support-cases">Special support cases</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-topics">Special Topics</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#updating-submodules-for-a-release">Updating submodules for a release</a></li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
<h2>General Overview</h2>
<p>Releasing a new version of PyTorch generally entails 3 major steps:</p>
<ol start="0">
<li>Cutting a release branch preparations</li>
<li>Cutting a release branch and making release branch specific changes</li>
<li>Drafting RCs (Release Candidates), and merging cherry picks</li>
<li>Promoting RCs to stable and performing release day tasks</li>
</ol>
<h2>Cutting a release branch preparations</h2>
<p>Following Requirements needs to be met prior to final RC Cut:</p>
<ul>
<li>Resolve all outstanding issues in the milestones(for example <a href="https://github.com/pytorch/pytorch/milestone/28">1.11.0</a>)before first RC cut is completed. After RC cut is completed following script should be executed from builder repo in order to validate the presence of the fixes in the release branch :</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a href="https://github.com/pytorch/pytorch/commits/v1.13.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21167/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21167",
"html_url": "https://github.com/huggingface/transformers/pull/21167",
"diff_url": "https://github.com/huggingface/transformers/pull/21167.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21167.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21166
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21166/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21166/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21166/events
|
https://github.com/huggingface/transformers/pull/21166
| 1,538,050,979
|
PR_kwDOCUB6oc5HoOGf
| 21,166
|
Fix doctest CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Fix doctest CI, broken by a change in #20757. The change in the multi-label example has a minor issue; see the comments in the review.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21166/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21166",
"html_url": "https://github.com/huggingface/transformers/pull/21166",
"diff_url": "https://github.com/huggingface/transformers/pull/21166.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21166.patch",
"merged_at": 1674057265000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21165
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21165/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21165/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21165/events
|
https://github.com/huggingface/transformers/issues/21165
| 1,537,527,628
|
I_kwDOCUB6oc5bpM9M
| 21,165
|
rag model evaluation program bug
|
{
"login": "AbrahamBob",
"id": 31125842,
"node_id": "MDQ6VXNlcjMxMTI1ODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/31125842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AbrahamBob",
"html_url": "https://github.com/AbrahamBob",
"followers_url": "https://api.github.com/users/AbrahamBob/followers",
"following_url": "https://api.github.com/users/AbrahamBob/following{/other_user}",
"gists_url": "https://api.github.com/users/AbrahamBob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AbrahamBob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AbrahamBob/subscriptions",
"organizations_url": "https://api.github.com/users/AbrahamBob/orgs",
"repos_url": "https://api.github.com/users/AbrahamBob/repos",
"events_url": "https://api.github.com/users/AbrahamBob/events{/privacy}",
"received_events_url": "https://api.github.com/users/AbrahamBob/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This example relies on earlier version of Transformers and HuggingFace Hub, you should downgrade them.",
"@sgugger I'm sorry, can you give some advice about the version?I have tried several versions myself without success.",
"It looks like this example was released along with Transformers 3.2.0 or 3.3.0.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,677
| 1,677
|
NONE
| null |
### System Info
transformers=4.25.1
huggingface-hub=0.10.1
tokenizers =0.13.2
python=3.7
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'RagTokenizer'.
The class this function is called from is 'BartTokenizerFast'.
Loading passages from https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr
Traceback (most recent call last):
File "/home/nano/transformers/examples/research_projects/rag/eval_rag.py", line 321, in <module>
main(args)
File "/home/nano/transformers/examples/research_projects/rag/eval_rag.py", line 295, in main
retriever = RagRetriever.from_pretrained(checkpoint, **model_kwargs)
File "/home/nano/transformers/src/transformers/models/rag/retrieval_rag.py", line 429, in from_pretrained
index = cls._build_index(config)
File "/home/nano/transformers/src/transformers/models/rag/retrieval_rag.py", line 400, in _build_index
config.index_path or LEGACY_INDEX_PATH,
File "/home/nano/transformers/src/transformers/models/rag/retrieval_rag.py", line 108, in __init__
self.passages = self._load_passages()
File "/home/nano/transformers/src/transformers/models/rag/retrieval_rag.py", line 133, in _load_passages
passages_path = self._resolve_path(self.index_path, self.PASSAGE_FILENAME)
File "/home/nano/transformers/src/transformers/models/rag/retrieval_rag.py", line 117, in _resolve_path
resolved_archive_file = cached_file(index_path, filename)
File "/home/nano/transformers/src/transformers/utils/hub.py", line 420, in cached_file
local_files_only=local_files_only,
File "/home/nano/miniconda3/envs/rag/lib/python3.7/site-packages/huggingface_hub/file_download.py", line 1022, in hf_hub_download
cache_dir, repo_folder_name(repo_id=repo_id, repo_type=repo_type)
File "/home/nano/miniconda3/envs/rag/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py", line 92, in _inner_fn
validate_repo_id(arg_value)
File "/home/nano/miniconda3/envs/rag/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py", line 137, in validate_repo_id
"Repo id must be in the form 'repo_name' or 'namespace/repo_name':"
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr'. Use `repo_type` argument if needed.
### Expected behavior
I ran the evaluation program of the RAG model and, after adding hyperparameters according to the example, it raises the error shown above.
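For reference, a minimal sketch of the two workarounds discussed in the comments above (the exact version pin is an assumption based on the maintainer's pointer to Transformers 3.2.0/3.3.0, and `use_dummy_dataset` is a standard RAG retriever option rather than this script's documented fix):
```python
from transformers import RagRetriever

# Option 1 (maintainers' suggestion): pin the release era this research
# example was written for, e.g.:
#   pip install "transformers==3.3.0"

# Option 2 (hypothetical sanity check): use the dummy index so the retriever
# never resolves the legacy wiki_dpr storage URL that newer huggingface_hub
# versions reject as an invalid repo id.
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
```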
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21165/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21164
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21164/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21164/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21164/events
|
https://github.com/huggingface/transformers/pull/21164
| 1,537,468,646
|
PR_kwDOCUB6oc5HmRsY
| 21,164
|
deleted references of self.vocab_size and self.type_vocab_size for multiple models [TF implementation]
|
{
"login": "susnato",
"id": 56069179,
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susnato",
"html_url": "https://github.com/susnato",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"repos_url": "https://api.github.com/users/susnato/repos",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, @gante as you said [here](https://github.com/huggingface/transformers/pull/21065#issuecomment-1385233073) I made changes for `src/transformers/models/albert/modeling_tf_albert.py`, could you please check it? If it's ok then I will push other changes of other models.\r\n\r\nThe failed test `ci/circleci: check_repository_consistency` are due to the fact that I only changed Embeddings for albert model and it is different from TFBertEmbeddings , the test will be successful when I change them too. ",
"_The documentation is not available anymore as the PR was closed or merged._",
"@susnato before moving forward, let's make sure our `transformers` master agrees with the set of changes we are about to make :)\r\n\r\n_________________________________\r\n@sgugger this PR contains a change I want to apply across TF models, and @susnato is kindly doing the bulk of the work. In essence, some `config` attributes are not constant throughout the model's life, like `config.vocab_size`, and our TF implementation stores them as immutable class attributes (e.g. `self.vocab_size = config.vocab_size`). PT doesn't have this problem, since it simply stores `self.config = config` in the layers, which benefits from the updates the mutable `config` dictionary may receive elsewhere.\r\n\r\nThe proposed change is to make TF implementation closer to PT implementation and store `self.config` in the layers, as opposed to individual configuration parameters. This also solves the bug that triggered this discussion, where the vocabulary size was not being correctly updated and causing exceptions.\r\n\r\nLet us know if you are okay with us making this change over most model architectures 🚀 ",
"@susnato you might need to run `make fixup` locally, to automatically format the code and make our CI happy",
"Hi, @gante I added all the models I found to have self.vocab_size and removed reference to self.vocab_size and self.type_vocab_size and also all the tests are passed! Could you please check it? ",
"> LGTM +1\r\n> \r\n> Can we edit the PR title to a shorter one before merging? sweat_smile\r\n\r\n@gante Done!",
"Awesome! \r\n\r\nThank you for all the work you've put into fixing this, @susnato 🤗"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR deletes references to `self.vocab_size` and `self.type_vocab_size` for these models [TensorFlow implementation]: bert, albert, lxmert, electra, tapas, convbert, layoutlm, gpt2, camembert, clip, ctrl, deberta, deberta_v2, distilbert, esm, funnel, gptj, groupvit, longformer, mobilebert, mpnet, openai, rembert, roberta, roberta_prelayernorm, roformer, xlm_roberta, xlnet.
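For illustration, a minimal sketch of the pattern being applied (the layer below is a made-up example, not actual library code; see the discussion in the comments above): instead of copying config values into immutable layer attributes, the layer keeps a reference to the mutable config and reads its attributes when needed, so later updates such as a resized vocabulary are picked up.
```python
import tensorflow as tf

class ExampleEmbeddings(tf.keras.layers.Layer):
    """Illustrative only: shows the attribute-storage change in this PR."""

    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        # Before: self.vocab_size = config.vocab_size
        # (a frozen copy that goes stale if config.vocab_size changes later,
        # e.g. after resizing the token embeddings)
        # After: keep the mutable config and read its attributes on use.
        self.config = config

    def build(self, input_shape):
        self.weight = self.add_weight(
            name="weight",
            shape=[self.config.vocab_size, self.config.hidden_size],
        )
        super().build(input_shape)
```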
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.([link](https://github.com/huggingface/transformers/issues/21053))
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21164/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21164",
"html_url": "https://github.com/huggingface/transformers/pull/21164",
"diff_url": "https://github.com/huggingface/transformers/pull/21164.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21164.patch",
"merged_at": 1674220261000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21163
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21163/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21163/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21163/events
|
https://github.com/huggingface/transformers/issues/21163
| 1,537,399,151
|
I_kwDOCUB6oc5botlv
| 21,163
|
Output of finetuned facebook/wav2vec2-xls-r-300m model is getting incorrect
|
{
"login": "Shubhambugade09",
"id": 114153974,
"node_id": "U_kgDOBs3Z9g",
"avatar_url": "https://avatars.githubusercontent.com/u/114153974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shubhambugade09",
"html_url": "https://github.com/Shubhambugade09",
"followers_url": "https://api.github.com/users/Shubhambugade09/followers",
"following_url": "https://api.github.com/users/Shubhambugade09/following{/other_user}",
"gists_url": "https://api.github.com/users/Shubhambugade09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shubhambugade09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shubhambugade09/subscriptions",
"organizations_url": "https://api.github.com/users/Shubhambugade09/orgs",
"repos_url": "https://api.github.com/users/Shubhambugade09/repos",
"events_url": "https://api.github.com/users/Shubhambugade09/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shubhambugade09/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sanchit-gandhi ",
"Hey @Shubhambugade09,\r\n\r\nCould you please try provide a google colab that easily reproduces your training run or maybe instead use the forum specific fine-tuning questions: https://discuss.huggingface.co/ \r\n\r\nThanks! ",
"Hey @patrickvonplaten\r\n\r\nI have send you the jupyter notebook on [patrick.v.platen@gmail.com](mailto:patrick.v.platen@gmail.com). please check it \r\n\r\nThanks! ",
"Hey @Shubhambugade09, would you mind posting a link to your Colab here so I can review it? It would be awesome to post the link here rather than sending as email so that the discussion remains public, this way helping other people who might be experiencing the same issue 🤗 Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,677
| 1,677
|
NONE
| null |
### System Info
Transformers 4.23.1
Pytorch 1.12.1
Datasets 2.4.0
Tokenizers 0.13.2
I have fine-tuned the facebook/wav2vec2-xls-r-300m model on my own dataset. I have cross-checked my dataset twice and it is in the correct format, like the LibriSpeech dataset. I fine-tuned it using @patrickvonplaten's guide as a reference:
https://huggingface.co/blog/fine-tune-wav2vec2-english

### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
a
### Expected behavior
In our case I am getting output like "THTMTHTET", while the expected transcription is "IT IS PERMITTED TO TURN THE WHEELPAN BELOW MM WITH THE CONTROLS IN TABLE IT IS PERMITTED TO TURN BELOW THE LAST TURNING' GROOVE"; this is the text we need.
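For anyone hitting similar garbled CTC output, a minimal decoding sanity check is sketched below (the checkpoint path is a placeholder for your own fine-tuned model, and the silent waveform is just a stand-in input); garbled output like this often indicates a mismatch between the model's fine-tuned vocabulary and the processor used for decoding:
```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder path: load BOTH from the same fine-tuned checkpoint directory,
# so the tokenizer vocabulary matches the model's output head.
checkpoint = "path/to/finetuned-checkpoint"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

speech = [0.0] * 16_000  # one second of silence as a stand-in 16 kHz waveform
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```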
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21163/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21162
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21162/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21162/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21162/events
|
https://github.com/huggingface/transformers/pull/21162
| 1,537,372,991
|
PR_kwDOCUB6oc5Hl93s
| 21,162
|
using raw string for regex to search <extra_id>
|
{
"login": "pfliu-nlp",
"id": 59123869,
"node_id": "MDQ6VXNlcjU5MTIzODY5",
"avatar_url": "https://avatars.githubusercontent.com/u/59123869?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pfliu-nlp",
"html_url": "https://github.com/pfliu-nlp",
"followers_url": "https://api.github.com/users/pfliu-nlp/followers",
"following_url": "https://api.github.com/users/pfliu-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/pfliu-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pfliu-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pfliu-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/pfliu-nlp/orgs",
"repos_url": "https://api.github.com/users/pfliu-nlp/repos",
"events_url": "https://api.github.com/users/pfliu-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/pfliu-nlp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Similar to this: https://github.com/huggingface/transformers/pull/21125
This change replaces the regex pattern written in a Unicode string with a raw string for these two files:
* `tokenization_t5.py`
* `test_tokenization_t5.py`
I also checked, and there are no more occurrences :)
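For illustration, the kind of change involved (the exact pattern string is an assumption; only the string prefix changes, not the regex behavior):
```python
import re

# Before: "\d" is not a recognized Python string escape, so this line emits a
# DeprecationWarning (a SyntaxWarning, and eventually an error, in newer
# Python versions), even though the resulting string is what the regex needs.
pattern_unicode = re.compile("<extra_id_\d+>")

# After: a raw string passes the backslash to the regex engine untouched.
pattern_raw = re.compile(r"<extra_id_\d+>")
```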
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Hi, @ArthurZucker @sgugger. Would you be happy to review this PR?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21162/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21162",
"html_url": "https://github.com/huggingface/transformers/pull/21162",
"diff_url": "https://github.com/huggingface/transformers/pull/21162.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21162.patch",
"merged_at": 1674053035000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21161
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21161/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21161/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21161/events
|
https://github.com/huggingface/transformers/issues/21161
| 1,537,337,761
|
I_kwDOCUB6oc5boemh
| 21,161
|
CodeGen Tokenizer Deletes Newline Symbols
|
{
"login": "hellodanylo",
"id": 1928726,
"node_id": "MDQ6VXNlcjE5Mjg3MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1928726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hellodanylo",
"html_url": "https://github.com/hellodanylo",
"followers_url": "https://api.github.com/users/hellodanylo/followers",
"following_url": "https://api.github.com/users/hellodanylo/following{/other_user}",
"gists_url": "https://api.github.com/users/hellodanylo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hellodanylo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hellodanylo/subscriptions",
"organizations_url": "https://api.github.com/users/hellodanylo/orgs",
"repos_url": "https://api.github.com/users/hellodanylo/repos",
"events_url": "https://api.github.com/users/hellodanylo/events{/privacy}",
"received_events_url": "https://api.github.com/users/hellodanylo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi, @hellodanylo this looks like the same issue described in issue #21120, where the tokenizer strips the whitespace and \\n in front and at the end of the sentence. To overcome it use `tokenizer = AutoTokenizer.from_pretrained(\"Salesforce/codegen-350M-multi\")`.",
"cc @ArthurZucker ",
"@susnato thanks, using the fast tokenizer did solve this issue.\r\n\r\nFor future readers, you can get the fast tokenizer using either of the following ways:\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained(\"Salesforce/codegen-350M-multi\")\r\ntokenizer = CodeGenTokenizerFast.from_pretrained(\"Salesforce/codegen-350M-multi\")\r\n```\r\n\r\nIf this bug is not specific to CodeGen tokenizer, then we can close this as duplicate of #21120?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,677
| 1,677
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
- Python version: 3.9.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1.post201 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.6.3 (cpu)
- Jax version: 0.4.1
- JaxLib version: 0.4.1
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@rooa @patrickvonplaten @patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The CodeGen tokenizer seems to remove the newline symbol in certain scenarios.
In particular, `decode(encode(text))` does not always equal the original `text`.
The following is the smallest example that reproduces the error but other text examples will have this issue as well.
```python
from transformers import CodeGenTokenizer
# other checkpoints in the CodeGen series have the same issue
tokenizer = CodeGenTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
# new line (10), space (32), space (32)
text = "\n "
print([ord(c) for c in text])
# output: [10, 32, 32]
encoded = tokenizer.encode(text)
print(encoded)
# output: [50286]
decoded = tokenizer.decode(encoded)
print([ord(c) for c in decoded])
# actual: [32, 32]
# expected: [10, 32, 32]
```
### Expected behavior
Expected: the decoded string is equal to the original string.
Actual: the decoded string is missing the leading new line symbol.
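Per the resolution in the comments above, the fast tokenizer preserves the leading newline; a minimal round-trip check, assuming the same checkpoint:
```python
from transformers import CodeGenTokenizerFast

tokenizer = CodeGenTokenizerFast.from_pretrained("Salesforce/codegen-350M-multi")
text = "\n  "
assert tokenizer.decode(tokenizer.encode(text)) == text  # round trip preserved
```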
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21161/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21160
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21160/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21160/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21160/events
|
https://github.com/huggingface/transformers/pull/21160
| 1,537,049,793
|
PR_kwDOCUB6oc5Hk6c8
| 21,160
|
Fix typos in documentation
|
{
"login": "jordimas",
"id": 309265,
"node_id": "MDQ6VXNlcjMwOTI2NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/309265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jordimas",
"html_url": "https://github.com/jordimas",
"followers_url": "https://api.github.com/users/jordimas/followers",
"following_url": "https://api.github.com/users/jordimas/following{/other_user}",
"gists_url": "https://api.github.com/users/jordimas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jordimas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jordimas/subscriptions",
"organizations_url": "https://api.github.com/users/jordimas/orgs",
"repos_url": "https://api.github.com/users/jordimas/repos",
"events_url": "https://api.github.com/users/jordimas/events{/privacy}",
"received_events_url": "https://api.github.com/users/jordimas/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Please make sure to run `make style` on your branch so that the quality scripts pass. Thank you!",
"Done, thanks @sgugger "
] | 1,673
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
Fixes to typos in documentation: @sgugger, @stevhliu and @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21160/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21160/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21160",
"html_url": "https://github.com/huggingface/transformers/pull/21160",
"diff_url": "https://github.com/huggingface/transformers/pull/21160.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21160.patch",
"merged_at": 1674050726000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21159
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21159/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21159/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21159/events
|
https://github.com/huggingface/transformers/issues/21159
| 1,537,023,347
|
I_kwDOCUB6oc5bnR1z
| 21,159
|
Feature Request: VideoMAEForMaskedVideoModeling
|
{
"login": "will-rice",
"id": 25072137,
"node_id": "MDQ6VXNlcjI1MDcyMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/25072137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/will-rice",
"html_url": "https://github.com/will-rice",
"followers_url": "https://api.github.com/users/will-rice/followers",
"following_url": "https://api.github.com/users/will-rice/following{/other_user}",
"gists_url": "https://api.github.com/users/will-rice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/will-rice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/will-rice/subscriptions",
"organizations_url": "https://api.github.com/users/will-rice/orgs",
"repos_url": "https://api.github.com/users/will-rice/repos",
"events_url": "https://api.github.com/users/will-rice/events{/privacy}",
"received_events_url": "https://api.github.com/users/will-rice/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nThis is already available, the class is called [VideoMAEForPreTraining](https://huggingface.co/docs/transformers/model_doc/videomae#transformers.VideoMAEForPreTraining). To reconstruct pixel values, you can load the following model:\r\n```\r\nfrom transformers import VideoMAEForPreTraining\r\n\r\nmodel = VideoMAEForPreTraining.from_pretrained(\"MCG-NJU/videomae-base\")\r\n```\r\n\r\nTo visualize a masked video, you can borrow the code from the [original implementation](https://github.com/MCG-NJU/VideoMAE/blob/45dcd7f10183099669baa481c6d33165842d8bf3/run_videomae_vis.py#L167)."
] | 1,673
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
### Feature request
Basically, it would be nice if we could fill in the masked video.
### Motivation
I doubt I'm the only person that would like to try/train this model for inpainting masked video.
### Your contribution
I guess I could contribute it since I went to the torch side.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21159/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21158
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21158/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21158/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21158/events
|
https://github.com/huggingface/transformers/pull/21158
| 1,537,019,647
|
PR_kwDOCUB6oc5Hk0Bs
| 21,158
|
Adapt repository creation to latest hf_hub
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> It also uses the new token keyword argument instead of use_auth_token,\r\n\r\nIt is indeed good practice to drop `use_auth_token` whenever possible when using `huggingface_hub` (in favor of `token`). Just to mention it, using `use_auth_token` would still work (without deprecation yet) so if you missed some occurrences, it will not break."
] | 1,673
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
This PR adapts the code to push the trained model to the Hub to the latest APIs in `huggingface_hub`. In particular, `Repository` is no longer responsible for the distant repo creation, so this PR switches to the use of `create_repo`. We relied on this behavior in:
- the Trainer
- all PyTorch no_trainer examples
- all Flax examples
- some tests
It also uses the new `token` keyword argument instead of `use_auth_token`, which is the reason for the version bump (`Repository` in v0.10 still expects `use_auth_token`). I updated all the uses I could find to switch from `use_auth_token` to `token` as part of this PR.
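For reference, a minimal sketch of the new pattern, assuming `huggingface_hub` >= 0.11 (the repo name, local directory, and `token` variable are placeholders):
```python
from huggingface_hub import Repository, create_repo

# `Repository` no longer creates the remote repo, so it is created explicitly first
repo_url = create_repo("username/my-model", token=token, exist_ok=True)
repo = Repository(local_dir="my-model", clone_from=repo_url, token=token)
```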
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21158/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21158",
"html_url": "https://github.com/huggingface/transformers/pull/21158",
"diff_url": "https://github.com/huggingface/transformers/pull/21158.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21158.patch",
"merged_at": 1674058441000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21157
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21157/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21157/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21157/events
|
https://github.com/huggingface/transformers/pull/21157
| 1,536,890,681
|
PR_kwDOCUB6oc5HkYpk
| 21,157
|
Make `parallelism` for CircleCI jobs work - but keep it to be `1` for now
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for looking into this. As said internally I'm really not a fan of the split test reports this gets us at the end.\r\n\r\nI can probably try to concatenate the failed tests (with details) at the end - although so far I don't have clear idea of the feasibility)\r\n\r\nBut at least we can switch back to N=1 if we really need to (i.e. we have real difficulty to read the test failures)",
"Ready for review again. So far it still uses `parallelism=1` - this PR just provides the necessary change for using `parallelism=N`. Hopefully I can figure out a way to make a better report format in another PR, and we can finally go for N > 1.",
"> Thanks! Another thing to look for before enabling it is if we just pay 8x the price for parallelism=8 and not way more than that.\r\n\r\n@sgugger I think the expectation to check is if we just pay 1x~2x the price for parallelism=N (say 8). Using 8 executors means less runtime on each executor (ideally 1/8), so the total should be the same (but there is definitely some overhead, like in cache loading / pip install steps)"
] | 1,673
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Enables `parallelism` for CircleCI jobs. So far it is only enabled for the torch/tf/flax jobs, and it can be switched off easily.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21157/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21157",
"html_url": "https://github.com/huggingface/transformers/pull/21157",
"diff_url": "https://github.com/huggingface/transformers/pull/21157.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21157.patch",
"merged_at": 1674229293000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21156
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21156/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21156/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21156/events
|
https://github.com/huggingface/transformers/issues/21156
| 1,536,855,554
|
I_kwDOCUB6oc5bmo4C
| 21,156
|
[HF Trainer] [PyTorch FSDP] Add support for backward_prefetch, forward_prefetch, limit_all_gathers
|
{
"login": "cavdard",
"id": 44590949,
"node_id": "MDQ6VXNlcjQ0NTkwOTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/44590949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cavdard",
"html_url": "https://github.com/cavdard",
"followers_url": "https://api.github.com/users/cavdard/followers",
"following_url": "https://api.github.com/users/cavdard/following{/other_user}",
"gists_url": "https://api.github.com/users/cavdard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cavdard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cavdard/subscriptions",
"organizations_url": "https://api.github.com/users/cavdard/orgs",
"repos_url": "https://api.github.com/users/cavdard/repos",
"events_url": "https://api.github.com/users/cavdard/events{/privacy}",
"received_events_url": "https://api.github.com/users/cavdard/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @pacman100 ",
"There are also 2 newly added[ sharding strategies](https://pytorch.org/docs/master/fsdp.html#torch.distributed.fsdp.ShardingStrategy) with PyTorch 2.0: `HYBRID_SHARD` and `_HYBRID_SHARD_ZERO2` that can impact performance. ",
"The ^^ above fix only addes backward_prefetch and forward_prefetch options in fsdp, limit_all_gathers and other sharding strategies is not available with current pytorch version used in repo.",
"Hi! These two fixes still doesn't allow the following two strategies\r\n> There are also 2 newly added[ sharding strategies](https://pytorch.org/docs/master/fsdp.html#torch.distributed.fsdp.ShardingStrategy) with PyTorch 2.0: HYBRID_SHARD and _HYBRID_SHARD_ZERO2 that can impact performance.\r\n\r\nWe could potentially enable it by adding two conditions here? https://github.com/huggingface/transformers/blob/118e9810687dd713b6be07af79e80eeb1d916908/src/transformers/trainer.py#L453-L458",
"> Hi! These two fixes still doesn't allow the following two strategies\r\n> \r\n> > There are also 2 newly added[ sharding strategies](https://pytorch.org/docs/master/fsdp.html#torch.distributed.fsdp.ShardingStrategy) with PyTorch 2.0: HYBRID_SHARD and _HYBRID_SHARD_ZERO2 that can impact performance.\r\n> \r\n> We could potentially enable it by adding two conditions here?\r\n> \r\n> https://github.com/huggingface/transformers/blob/118e9810687dd713b6be07af79e80eeb1d916908/src/transformers/trainer.py#L453-L458\r\n\r\nYes, I will open a PR shortly."
] | 1,673
| 1,685
| 1,675
|
CONTRIBUTOR
| null |
### **Feature request**
Can we add Trainer support for the following [FSDP](https://pytorch.org/docs/1.13/fsdp.html?highlight=fsdp#module-torch.distributed.fsdp) features? `backward_prefetch`, `forward_prefetch` and `limit_all_gathers`
### **Motivation**
`backward_prefetch`, `forward_prefetch` and `limit_all_gathers` are important to achieve best performance when training at scale with FSDP.
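For context, a minimal sketch of what these flags control at the FSDP level (assuming a recent PyTorch; the process group and `model` setup are elided):
```python
from torch.distributed.fsdp import BackwardPrefetch, FullyShardedDataParallel as FSDP

# sketch only: assumes torch.distributed is initialized and `model` is a module on the right device
fsdp_model = FSDP(
    model,
    backward_prefetch=BackwardPrefetch.BACKWARD_PRE,  # overlap the next all-gather with backward compute
    forward_prefetch=True,                            # issue the next all-gather early during forward
    limit_all_gathers=True,                           # rate-limit all-gathers to bound peak memory
)
```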
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21156/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21155
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21155/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21155/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21155/events
|
https://github.com/huggingface/transformers/pull/21155
| 1,536,843,075
|
PR_kwDOCUB6oc5HkOi2
| 21,155
|
Update examples with image processors
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This broke the deepspeed tests. Fix is here: https://github.com/huggingface/transformers/pull/21283\r\n\r\nwhen modifying examples and breaking back compat please scan the slow tests and adjust those too. Thank you!\r\n\r\nThe reason this is important is because Deepspeed CI runs our slow deepspeed tests as normal CI so when we break things their CI is broken.",
"@stas00 - thanks for applying a quick fix and apologies about the break. I'll make sure to remember the slow tests next time! ",
"Thank you, Amy!\r\n\r\n`tests/extended/` and `tests/deepspeed` are the ones that heavily rely on `examples/pytorch`\r\n\r\n"
] | 1,673
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Updates all of the feature extractor references to image processors in the `examples` folder.
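Illustratively, the swaps are of this shape (a sketch; the checkpoint name is a placeholder):
```python
# before
from transformers import AutoFeatureExtractor
feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224")

# after
from transformers import AutoImageProcessor
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
```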
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21155/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21155",
"html_url": "https://github.com/huggingface/transformers/pull/21155",
"diff_url": "https://github.com/huggingface/transformers/pull/21155.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21155.patch",
"merged_at": 1674141298000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21154
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21154/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21154/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21154/events
|
https://github.com/huggingface/transformers/pull/21154
| 1,536,587,813
|
PR_kwDOCUB6oc5HjYLG
| 21,154
|
CLI: update hub PR URL
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"It is already recent enough: the change was added in `v0.10.0` (see `1.` in `v0.10.0` breaking changes), which is our current minimum version"
] | 1,673
| 1,673
| 1,673
|
MEMBER
| null |
# What does this PR do?
Keeps up with Hub API changes and updates the instructions for getting the PR URL (which was returning an object instead of the URL string)
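A sketch of the pattern in question (the method and repo names are assumptions for illustration, not the exact code this PR touches):
```python
from huggingface_hub import HfApi

api = HfApi()
commit_info = api.upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id="username/my-model",
    create_pr=True,
)
print(commit_info.pr_url)  # the PR URL now lives on the returned object
```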
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21154/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21154",
"html_url": "https://github.com/huggingface/transformers/pull/21154",
"diff_url": "https://github.com/huggingface/transformers/pull/21154.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21154.patch",
"merged_at": 1673973407000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21153
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21153/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21153/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21153/events
|
https://github.com/huggingface/transformers/pull/21153
| 1,536,533,697
|
PR_kwDOCUB6oc5HjMmc
| 21,153
|
Change variable name to prevent shadowing
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Since the changes are small, I guess it's good to merge but still confirming. ",
"Yes, one core maintainer approval is enough to merge."
] | 1,673
| 1,673
| 1,673
|
MEMBER
| null |
This PR replaces the `input` variable with `input_string` to prevent shadowing of the built-in `input()` function.
Thanks to @LysandreJik for catching it.
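For illustration, the failure mode that the rename avoids:
```python
# shadowing the builtin makes it unreachable for the rest of the scope
input = "y"
try:
    input("Continue? ")
except TypeError as err:
    print(err)  # 'str' object is not callable

# after the rename, the builtin stays usable
input_string = "y"
```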
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21153/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21153",
"html_url": "https://github.com/huggingface/transformers/pull/21153",
"diff_url": "https://github.com/huggingface/transformers/pull/21153.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21153.patch",
"merged_at": 1673972963000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21152
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21152/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21152/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21152/events
|
https://github.com/huggingface/transformers/pull/21152
| 1,536,526,000
|
PR_kwDOCUB6oc5HjK_b
| 21,152
|
[DOCTEST] Refactor doctest for simplicity and safety
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21152). All of your documentation changes will be reflected on that endpoint."
] | 1,673
| 1,682
| 1,682
|
COLLABORATOR
| null |
# What does this PR do?
This is a draft PR to simplify the testing of documentation, which would rely on the `doctest` API.
The documentation tests related to any newly added model will also run as part of the non-slow CI, just to make sure that we fix everything in one go (and that contributors have an easier time doing it).
I am also learning about GitHub workflows and CI jobs, which is a good exercise!
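For reference, a minimal sketch of driving the `doctest` API directly (the module choice is illustrative):
```python
import doctest

import transformers.models.bert.modeling_bert as module_to_test

# run all doctests found in the module's docstrings
results = doctest.testmod(module_to_test, verbose=False)
print(f"{results.failed} failed / {results.attempted} attempted")
```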
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21152/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21152",
"html_url": "https://github.com/huggingface/transformers/pull/21152",
"diff_url": "https://github.com/huggingface/transformers/pull/21152.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21152.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21151
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21151/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21151/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21151/events
|
https://github.com/huggingface/transformers/issues/21151
| 1,536,499,901
|
I_kwDOCUB6oc5blSC9
| 21,151
|
Contrastive Search in .generate() function doesn't work with Half
|
{
"login": "sam-ulrich1",
"id": 40002776,
"node_id": "MDQ6VXNlcjQwMDAyNzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/40002776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sam-ulrich1",
"html_url": "https://github.com/sam-ulrich1",
"followers_url": "https://api.github.com/users/sam-ulrich1/followers",
"following_url": "https://api.github.com/users/sam-ulrich1/following{/other_user}",
"gists_url": "https://api.github.com/users/sam-ulrich1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sam-ulrich1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sam-ulrich1/subscriptions",
"organizations_url": "https://api.github.com/users/sam-ulrich1/orgs",
"repos_url": "https://api.github.com/users/sam-ulrich1/repos",
"events_url": "https://api.github.com/users/sam-ulrich1/events{/privacy}",
"received_events_url": "https://api.github.com/users/sam-ulrich1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @sam-ulrich1 👋 \r\n\r\nTo be candid, fp16 was not a concern when writing contrastive search :) I've tried adding your suggested change and running the script below, but that was not enough to fix it\r\n\r\n```py\r\nfrom transformers import GPT2Tokenizer, OPTForCausalLM\r\nimport torch\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"facebook/opt-350m\", padding_side='left')\r\nmodel = OPTForCausalLM.from_pretrained(\"facebook/opt-350m\", torch_dtype=torch.float16)\r\n\r\ninputs = tokenizer([\"My cat is\"], return_tensors=\"pt\")\r\n\r\noutputs = model.generate(**inputs, top_k=4, penalty_alpha=0.6)\r\nprint(tokenizer.batch_decode(outputs.sequences))\r\n```\r\n\r\nWould you be able to share a snippet of what you're trying to run? :)",
"Odd! It works on my machine (pun intended)!\n\nLet me get my version and other info and I can make a PR if you'd like. That way we can work from code not just snippets",
"@gante Could you share your traceback? I'll take a look at this later today",
"@sam-ulrich1 haha roles reversed, usually I'm the one asking for tracebacks!\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/joao/transformers/../joao_scripts/dbg.py\", line 17, in <module>\r\n outputs = model.generate(**inputs, top_k=4, penalty_alpha=0.6)\r\n File \"/home/joao/hf/lib/python3.10/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/joao/transformers/src/transformers/generation/utils.py\", line 1372, in generate\r\n return self.contrastive_search(\r\n File \"/home/joao/hf/lib/python3.10/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/joao/transformers/src/transformers/generation/utils.py\", line 1769, in contrastive_search\r\n outputs = self(\r\n File \"/home/joao/hf/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/joao/transformers/src/transformers/models/opt/modeling_opt.py\", line 934, in forward\r\n outputs = self.model.decoder(\r\n File \"/home/joao/hf/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/joao/transformers/src/transformers/models/opt/modeling_opt.py\", line 645, in forward\r\n inputs_embeds = self.project_in(inputs_embeds)\r\n File \"/home/joao/hf/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/joao/hf/lib/python3.10/site-packages/torch/nn/modules/linear.py\", line 114, in forward\r\n return F.linear(input, self.weight, self.bias)\r\nRuntimeError: \"addmm_impl_cpu_\" not implemented for 'Half'\r\n```",
"Ya I got a kick out of that too!\r\n\r\nIt actually looks like that is an OPT issue with Half. I'm playing around with CodeGen so that would be my reference but I know other models are affected as well. Currently the problem I'm targeting is `\"baddbmm_with_gemm\" not implemented for 'Half'`\r\n\r\nI'll take a look at the OPT thing as well but if it's out of scope I'll probably start another issue to keep the tracking simple.",
"@gante I'm not gonna get this done today but I'll get it knocked out by the end of the week. I just have a bit busier week than I expected ",
"@sam-ulrich1 no worries :) and let me know if you need a hand!",
"@gante How do I run the tests in the repo? I added the below test at the below link so that I can validate my fix. I want to run this test on the CodeGen model but I've never worked with a testing setup like this\r\nhttps://github.com/huggingface/transformers/blob/0359e2e15f4504513fd2995bdd6dd654c747b313/tests/generation/test_utils.py#L1432\r\n\r\n```\r\n def test_contrastive_generate_fp16(self):\r\n # check `generate()` and `contrastive_search()` are equal\r\n for model_class in self.all_generative_model_classes:\r\n\r\n # won't fix: FSMT and Reformer have a different cache variable type (and format).\r\n if any(model_name in model_class.__name__.lower() for model_name in [\"fsmt\", \"reformer\"]):\r\n return\r\n\r\n config, input_ids, attention_mask, max_length = self._get_input_ids_and_config()\r\n\r\n # NOTE: contrastive search only works with cache on at the moment.\r\n if not hasattr(config, \"use_cache\"):\r\n return\r\n config.use_cache = True\r\n config.is_decoder = True\r\n config.torch_dtype = torch.float16\r\n\r\n # test old generation output for backwards compatibility\r\n model = model_class(config).to(torch_device).eval()\r\n output_contrastive, output_generate = self._contrastive_generate(\r\n model=model, input_ids=input_ids, attention_mask=attention_mask, max_length=max_length\r\n )\r\n self.assertListEqual(output_contrastive.tolist(), output_generate.tolist())\r\n```",
"@sam-ulrich1 try `py.test tests/ -k contrastive_generate_fp16 -vv`, assuming you are in `.../transformers/`.\r\n\r\n(`tests/` is the folder containing the test files, `-k` filters tests by name, `contrastive_generate_fp16` is the test name filter based on your test name)",
"Thanks!",
"@gante Okay it seems to be fixed but there is one model that fails the test for (what appears to be) a unrelated problem. What's the procedure for this? Can y'all accept a PR if all the tests don't pass?\r\n\r\nHere's the failing model:\r\n```\r\nFAILED tests/models/git/test_modeling_git.py::GitModelTest::test_contrastive_generate_fp16 - RuntimeError: output with shape [10, 1, 1, 1] doesn't match the broadcast shape [10, 1, 1, 4]\r\n```\r\n\r\nAnd pytest stack trace+\r\n```python\r\n___________________________________________________________________________________________________________ GitModelTest.test_contrastive_generate_fp16 ____________________________________________________________________________________________________________\r\n\r\nself = <tests.models.git.test_modeling_git.GitModelTest testMethod=test_contrastive_generate_fp16>\r\n\r\n def test_contrastive_generate_fp16(self):\r\n # check `generate()` and `contrastive_search()` are equal\r\n for model_class in self.all_generative_model_classes:\r\n \r\n # won't fix: FSMT and Reformer have a different cache variable type (and format).\r\n if any(model_name in model_class.__name__.lower() for model_name in [\"fsmt\", \"reformer\"]):\r\n return\r\n \r\n config, input_ids, attention_mask, max_length = self._get_input_ids_and_config()\r\n \r\n # NOTE: contrastive search only works with cache on at the moment.\r\n if not hasattr(config, \"use_cache\"):\r\n return\r\n config.use_cache = True\r\n config.is_decoder = True\r\n config.torch_dtype = torch.float16\r\n print(config)\r\n \r\n # test old generation output for backwards compatibility\r\n model = model_class(config).to(torch_device).eval()\r\n> output_contrastive, output_generate = self._contrastive_generate(\r\n model=model, input_ids=input_ids, attention_mask=attention_mask, max_length=max_length\r\n )\r\n\r\ntests/generation/test_utils.py:1453: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/generation/test_utils.py:655: in _contrastive_generate\r\n output_generate = model.generate(\r\n../../../anaconda3/envs/transformers/lib/python3.9/site-packages/torch/autograd/grad_mode.py:27: in decorate_context\r\n return func(*args, **kwargs)\r\nsrc/transformers/generation/utils.py:1321: in generate\r\n return self.contrastive_search(\r\n../../../anaconda3/envs/transformers/lib/python3.9/site-packages/torch/autograd/grad_mode.py:27: in decorate_context\r\n return func(*args, **kwargs)\r\nsrc/transformers/generation/utils.py:1804: in contrastive_search\r\n outputs = self(\r\n../../../anaconda3/envs/transformers/lib/python3.9/site-packages/torch/nn/modules/module.py:1194: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/git/modeling_git.py:1478: in forward\r\n outputs = self.git(\r\n../../../anaconda3/envs/transformers/lib/python3.9/site-packages/torch/nn/modules/module.py:1194: in _call_impl\r\n return forward_call(*input, **kwargs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = GitModel(\r\n (embeddings): GitEmbeddings(\r\n (word_embeddings): Embedding(99, 32, padding_idx=98)\r\n 
(position_embedd...n_features=768, out_features=32, bias=True)\r\n (1): LayerNorm((32,), eps=1e-05, elementwise_affine=True)\r\n )\r\n )\r\n)\r\ninput_ids = tensor([[36],\r\n [64],\r\n [41],\r\n [89],\r\n [58],\r\n [72],\r\n [41],\r\n [ 2],\r\n [36],\r\n [64]], device='cuda:0')\r\nattention_mask = tensor([[1, 1, 1, 1],\r\n [1, 1, 1, 1],\r\n [1, 1, 1, 1],\r\n [1, 1, 1, 1],\r\n [1, 1, 1, 1],\r\n [1, 1, 1, 1],\r\n [1, 1, 1, 1],\r\n [1, 1, 1, 1],\r\n [1, 1, 1, 1],\r\n [1, 1, 1, 1]], device='cuda:0')\r\nposition_ids = None, pixel_values = None, head_mask = [None, None, None, None, None], inputs_embeds = None, past_key_values = None, use_cache = True, output_attentions = False, output_hidden_states = True, return_dict = True\r\n\r\n @add_start_docstrings_to_model_forward(GIT_INPUTS_DOCSTRING.format(\"batch_size, sequence_length\"))\r\n @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=_CONFIG_FOR_DOC)\r\n def forward(\r\n self,\r\n input_ids: Optional[torch.Tensor] = None,\r\n attention_mask: Optional[torch.Tensor] = None,\r\n position_ids: Optional[torch.Tensor] = None,\r\n pixel_values: Optional[torch.Tensor] = None,\r\n head_mask: Optional[torch.Tensor] = None,\r\n inputs_embeds: Optional[torch.Tensor] = None,\r\n past_key_values: Optional[List[torch.FloatTensor]] = None,\r\n use_cache: Optional[bool] = None,\r\n output_attentions: Optional[bool] = None,\r\n output_hidden_states: Optional[bool] = None,\r\n return_dict: Optional[bool] = None,\r\n ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPooling]:\r\n r\"\"\"\r\n past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):\r\n Contains precomputed key and value hidden states of the attention blocks. 
Can be used to speed up decoding.\r\n \r\n If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that\r\n don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all\r\n `decoder_input_ids` of shape `(batch_size, sequence_length)`.\r\n use_cache (`bool`, *optional*):\r\n If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see\r\n `past_key_values`).\r\n \r\n Returns:\r\n \r\n Examples:\r\n \r\n ```python\r\n >>> from transformers import AutoProcessor, AutoModel\r\n >>> import requests\r\n >>> from PIL import Image\r\n \r\n >>> processor = AutoProcessor.from_pretrained(\"microsoft/git-base\")\r\n >>> model = AutoModel.from_pretrained(\"microsoft/git-base\")\r\n \r\n >>> url = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\n >>> image = Image.open(requests.get(url, stream=True).raw)\r\n \r\n >>> text = \"this is an image of two cats\"\r\n \r\n >>> inputs = processor(text, images=image, return_tensors=\"pt\")\r\n \r\n >>> outputs = model(**inputs)\r\n >>> last_hidden_state = outputs.last_hidden_state\r\n ```\"\"\"\r\n output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions\r\n output_hidden_states = (\r\n output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states\r\n )\r\n use_cache = use_cache if use_cache is not None else self.config.use_cache\r\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n \r\n if input_ids is not None and inputs_embeds is not None:\r\n raise ValueError(\"You cannot specify both input_ids and inputs_embeds at the same time\")\r\n elif input_ids is not None:\r\n input_shape = input_ids.size()\r\n elif inputs_embeds is not None:\r\n input_shape = inputs_embeds.size()[:-1]\r\n else:\r\n raise ValueError(\"You have to specify either input_ids or inputs_embeds\")\r\n \r\n seq_length = input_shape[1]\r\n \r\n # past_key_values_length\r\n past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0\r\n \r\n # Prepare head mask if needed\r\n # 1.0 in head_mask indicate we keep the head\r\n # attention_probs has shape bsz x n_heads x N x N\r\n # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]\r\n # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]\r\n head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)\r\n \r\n projected_visual_features = None\r\n if pixel_values is not None:\r\n if pixel_values.ndim == 4:\r\n # here we assume pixel_values is of shape (batch_size, num_channels, height, width)\r\n visual_features = self.image_encoder(pixel_values).last_hidden_state\r\n \r\n elif pixel_values.ndim == 5:\r\n # here we assume pixel_values is of shape (batch_size, num_frames, num_channels, height, width)\r\n visual_features = []\r\n for frame_idx in range(pixel_values.shape[1]):\r\n visual_features_frame = self.image_encoder(pixel_values[:, frame_idx, :, :]).last_hidden_state\r\n visual_features_frame += self.img_temperal_embedding[frame_idx]\r\n visual_features.append(visual_features_frame)\r\n \r\n # finally, concatenate all features along sequence dimension\r\n visual_features = torch.cat(visual_features, dim=1)\r\n \r\n else:\r\n raise ValueError(\"pixel_values must be of rank 4 or 5\")\r\n \r\n projected_visual_features = self.visual_projection(visual_features)\r\n \r\n 
embedding_output = self.embeddings(\r\n input_ids=input_ids,\r\n position_ids=position_ids,\r\n inputs_embeds=inputs_embeds,\r\n past_key_values_length=past_key_values_length,\r\n )\r\n \r\n if projected_visual_features is None:\r\n projected_visual_features = torch.zeros(\r\n (embedding_output.shape[0], 0, embedding_output.shape[2]),\r\n dtype=embedding_output.dtype,\r\n device=embedding_output.device,\r\n )\r\n \r\n # Repeat visual features to match embedding batch size.\r\n projected_visual_features = projected_visual_features.repeat(\r\n embedding_output.size(0) // projected_visual_features.size(0), 1, 1\r\n )\r\n \r\n # concatenate patch token and text token embeddings\r\n hidden_states = torch.cat((projected_visual_features, embedding_output), dim=1)\r\n \r\n # By default, an additive causal mask is created\r\n # for masking the future (one direction).\r\n tgt_mask = self._generate_future_mask(seq_length, embedding_output.dtype, embedding_output.device)\r\n \r\n # Create an attention mask of shape (batch_size, 1, tgt_seq_len, src_seq_len)\r\n combined_attention_mask = self.create_attention_mask(\r\n tgt=embedding_output,\r\n memory=projected_visual_features,\r\n tgt_mask=tgt_mask,\r\n past_key_values_length=past_key_values_length,\r\n )\r\n \r\n if attention_mask is not None:\r\n # if the user provides an attention mask, we add it to the default one\r\n # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]\r\n expanded_attn_mask = _expand_mask(attention_mask, embedding_output.dtype, tgt_len=input_shape[-1]).to(\r\n embedding_output.device\r\n )\r\n if past_key_values_length > 0:\r\n expanded_attn_mask = expanded_attn_mask[:, :, -past_key_values_length:, :]\r\n else:\r\n> combined_attention_mask[:, :, -input_shape[1] :, -input_shape[1] :] += expanded_attn_mask\r\nE RuntimeError: output with shape [10, 1, 1, 1] doesn't match the broadcast shape [10, 1, 1, 4]\r\n```",
"Oh yeah, GIT is a bit different -- it's a multimodal model that requires careful manipulations at generate time. Open a PR with what you have now, I think I can figure out what's wrong with GIT after I have access to the changes :)",
"Jumping here, the error `RuntimeError: \"addmm_impl_cpu_\" not implemented for 'Half'` is just that `Half` only works on `GPU` and should not be used on cpu 😉 ",
"That would make a lot of sense! I didn't address that error in this fix. I focused on `\"baddbmm_with_gemm\" not implemented for 'Half'` but I can take a look at that error over the weekend if you'd like",
"@gante Fix is here rebased to latest commit on main but the PR guidelines are kinda long so I won't be able to create the PR until later\r\nhttps://github.com/gage-technologies/transformers",
"I am having this issue as well. I tried 4.26 and 4.25.1. I am gonna try @sam-ulrich1 solution.",
"The fix did not help. Neither using DeepSpeed nor using vanilla Transformers. Using bfloat16 gives me expected results(but I need float16 for DeepSpeed)",
"I take back what I said. I am not having this issue at all. With or withou t @sam-ulrich1 fix, it is working fine. The issue is with DeepSpeed. ",
"I'm also facing a similar issue: \r\n```py\r\ngenerator = pipeline(\"text2text-generation\", model=\"philschmid/flan-t5-xxl-sharded-fp16\", model_kwargs={\"load_in_8bit\":True, \"device_map\": \"auto\"})\r\noutput = generator(prompt, penalty_alpha=0.6, top_k=4, max_length=256)\r\n```\r\n\r\nGives me the error: \r\n```\r\nRuntimeError: \"softmax_lastdim_kernel_impl\" not implemented for 'Half'\r\n```\r\n\r\nSo contrastive search seems not compatible with loading the model in 8-bit. Is that expected or a bug? ",
"@sam-ulrich1 do you have some updates on your end? I can open a PR from the changes in your fork, if you're interested :)",
"@gante Shoot! Sorry man this slipped my mind. Let me take a look.at the PR guidelines again and see if I can get mine rebased and prepped and if not then I'm happy to let you. \n\nThanks man!",
"Just to flag, the error I faced [here ](https://github.com/huggingface/transformers/issues/21151#issuecomment-1427589596) still exists with @sam-ulrich1's fix. Should I open a new Issue as this may be related specifically to 8-bit?",
"@gante I'm gonna look at this today. Sorry man, I've been slammed with work the past month",
"@gante If you want to just snag my changes go ahead otherwise I will eventually get to this it's just been a really tough few weeks",
"BTW I'm not sure if this fix is still needed, I am unable to reproduce the issue on `main`.\r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nimport torch\r\n\r\ntok = AutoTokenizer.from_pretrained(\"distilgpt2\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"distilgpt2\", torch_dtype=torch.float16).to(\"cuda\")\r\n\r\ninputs = tok([\"This cat is\"], return_tensors=\"pt\").to(\"cuda\")\r\ngen_out = model.generate(**inputs, top_k=4, penalty_alpha=0.6)\r\n```\r\n\r\nIf someone else comes across this issue, please let me know 🙏 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,679
| 1,679
|
NONE
| null |
### System Info
The CLI fails but this is irrelevant to the problem
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Load any model like so
```
model = AutoModelForCausalLM.from_pretrained(
"<PATH>",
torch_dtype=torch.float16,
)
```
2. Perform generation using contrastive search
```
gen_tokens = model.generate(
tokenized_input.input_ids,
top_k=4,
penalty_alpha=0.6
)
```
### Expected behavior
Contrastive search probably should work with torch.float16 (if not just let me know - idk if there are stability issues).
This can be fixed by adding the following code to https://github.com/huggingface/transformers/blob/25ddd91b249014d818fb2ed3d4ba856ed9a5653e/src/transformers/generation/utils.py#L1873
```
# conditionally convert from float16
if context_hidden.dtype == torch.float16:
context_hidden = context_hidden.to(dtype=torch.float32)
if next_hidden.dtype == torch.float16:
next_hidden = next_hidden.to(dtype=torch.float32)
if top_k_probs.dtype == torch.float16:
top_k_probs = top_k_probs.to(dtype=torch.float32)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21151/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21150
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21150/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21150/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21150/events
|
https://github.com/huggingface/transformers/pull/21150
| 1,536,283,439
|
PR_kwDOCUB6oc5HiXXN
| 21,150
|
OPT: Fix batched generation with FLAX
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,674
| 1,674
|
MEMBER
| null |
# What does this PR do?
Fixes #20666
OPT numerical masking was using `-inf` where the attention mask was `0`, which in turn caused the hidden states to be `nan` and derail the whole generation. Changing to a common masking value (`-1e9`) fixes the issue. I've also taken the opportunity to re-enable the commented out tests :)
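The failure mode is easy to see on a fully masked attention row (a minimal sketch):
```python
import jax
import jax.numpy as jnp

scores = jnp.zeros(4)
mask = jnp.zeros(4, dtype=bool)  # e.g. an all-padding row in batched generation

print(jax.nn.softmax(jnp.where(mask, scores, -jnp.inf)))  # [nan nan nan nan]
print(jax.nn.softmax(jnp.where(mask, scores, -1e9)))      # [0.25 0.25 0.25 0.25]
```
With `-inf`, the max-subtraction inside the softmax computes `-inf - (-inf)`, which is `nan`; with a large finite value the row stays finite.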
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21150/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21150",
"html_url": "https://github.com/huggingface/transformers/pull/21150",
"diff_url": "https://github.com/huggingface/transformers/pull/21150.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21150.patch",
"merged_at": 1674051894000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21149
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21149/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21149/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21149/events
|
https://github.com/huggingface/transformers/pull/21149
| 1,536,209,696
|
PR_kwDOCUB6oc5HiHUz
| 21,149
|
Generate: TF contrastive search must pop `use_cache` from `model_kwargs`
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,673
| 1,673
|
MEMBER
| null |
# What does this PR do?
Fixes a slow test that broke with #20994
(actually, more like ~25 slow tests 😅 )
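A sketch of the kind of change the title describes (hypothetical names, not the literal diff):
```python
# consume the flag before `model_kwargs` is forwarded to the model,
# so the model does not receive it as an unexpected keyword argument
use_cache = model_kwargs.pop("use_cache", True)
```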
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21149/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21149",
"html_url": "https://github.com/huggingface/transformers/pull/21149",
"diff_url": "https://github.com/huggingface/transformers/pull/21149.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21149.patch",
"merged_at": 1673962973000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21148
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21148/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21148/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21148/events
|
https://github.com/huggingface/transformers/pull/21148
| 1,536,032,468
|
PR_kwDOCUB6oc5HhhV3
| 21,148
|
Fixed num_channels!=3 normalization training [#20630]
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@NielsRogge CI is green, but I don't review (yet) the content of PR though",
"@NielsRogge \r\n\r\nI think it is already there: when I clicked `Squash and merge`, it shows at the end\r\n\r\n```\r\nCo-authored-by: Lay Jain <layjain@basil.csail.mit.edu>\r\nCo-authored-by: ydshieh <ydshieh@users.noreply.github.com>\r\n```",
"cc @layjain ",
"The CI got fixed on the other PR so I merged it. Is there still a need for this one?",
"@sgugger OK, this morning it was not running even I pushed a commit. No more need of this PR - despite I have a question regarding the logic. We can fix it if @NielsRogge think what I pointed is indeed a bug."
] | 1,673
| 1,675
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
Fork of #20630 (which fixes https://github.com/huggingface/transformers/issues/20580 and https://github.com/huggingface/transformers/issues/19913)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21148/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21148",
"html_url": "https://github.com/huggingface/transformers/pull/21148",
"diff_url": "https://github.com/huggingface/transformers/pull/21148.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21148.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21147
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21147/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21147/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21147/events
|
https://github.com/huggingface/transformers/issues/21147
| 1,535,975,165
|
I_kwDOCUB6oc5bjR79
| 21,147
|
Fine tuning of Donut rvl-cdip throwing an error: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'
|
{
"login": "AIritik",
"id": 122866058,
"node_id": "U_kgDOB1LJig",
"avatar_url": "https://avatars.githubusercontent.com/u/122866058?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AIritik",
"html_url": "https://github.com/AIritik",
"followers_url": "https://api.github.com/users/AIritik/followers",
"following_url": "https://api.github.com/users/AIritik/following{/other_user}",
"gists_url": "https://api.github.com/users/AIritik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AIritik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AIritik/subscriptions",
"organizations_url": "https://api.github.com/users/AIritik/orgs",
"repos_url": "https://api.github.com/users/AIritik/repos",
"events_url": "https://api.github.com/users/AIritik/events{/privacy}",
"received_events_url": "https://api.github.com/users/AIritik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This looks like an error we had with recent versions of protobuf. Are you absolutely sure the env information you are pasting is correct? Could you try doing `pip install protobuf<=3.20.2 --upgrade` in your env?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,677
| 1,677
|
NONE
| null |
When I am trying to run these lines:
```python
from transformers import DonutProcessor, VisionEncoderDecoderModel, BartConfig

processor = DonutProcessor.from_pretrained("nielsr/donut-base")
model = VisionEncoderDecoderModel.from_pretrained("nielsr/donut-base", config=config)
```
I get this error:
```
AttributeError Traceback (most recent call last)
/tmp/ipykernel_3566149/2070905111.py in
2 from transformers import AutoTokenizer, AutoModel
3
----> 4 tokenizer = AutoTokenizer.from_pretrained("nielsr/donut-base")
5
6 model = AutoModel.from_pretrained("nielsr/donut-base")
~/.local/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
653 f"Tokenizer class {tokenizer_class_candidate} does not exist or is not currently imported."
654 )
--> 655 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
656
657 # Otherwise we have to be creative.
~/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1799 [logger.info](http://logger.info/)(f"loading file {file_path} from cache at {resolved_vocab_files[file_id]}")
1800
-> 1801 return cls._from_pretrained(
1802 resolved_vocab_files,
1803 pretrained_model_name_or_path,
~/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, local_files_only, _commit_hash, *init_inputs, **kwargs)
1954 # Instantiate tokenizer.
1955 try:
-> 1956 tokenizer = cls(*init_inputs, **init_kwargs)
1957 except OSError:
1958 raise OSError(
~/.local/lib/python3.8/site-packages/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py in init(self, vocab_file, tokenizer_file, bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, mask_token, **kwargs)
153 mask_token = AddedToken(mask_token, lstrip=True, rstrip=False) if isinstance(mask_token, str) else mask_token
154
--> 155 super().init(
156 vocab_file,
157 tokenizer_file=tokenizer_file,
~/.local/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py in init(self, *args, **kwargs)
116 # We need to create and convert a slow tokenizer to build the backend
117 slow_tokenizer = self.slow_tokenizer_class(*args, **kwargs)
--> 118 fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
119 else:
120 raise ValueError(
~/.local/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py in convert_slow_tokenizer(transformer_tokenizer)
1160 converter_class = SLOW_TO_FAST_CONVERTERS[tokenizer_class_name]
1161
-> 1162 return converter_class(transformer_tokenizer).converted()
~/.local/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py in init(self, *args)
436 super().init(*args)
437
--> 438 from .utils import sentencepiece_model_pb2 as model_pb2
439
440 m = model_pb2.ModelProto()
~/.local/lib/python3.8/site-packages/transformers/utils/sentencepiece_model_pb2.py in
32 syntax="proto2",
33 serialized_options=b"H\003",
---> 34 create_key=_descriptor._internal_create_key,
35 serialized_pb=(
36 b'\n\x19sentencepiece_model.proto\x12\rsentencepiece"\xa1\n\n\x0bTrainerSpec\x12\r\n\x05input\x18\x01'
AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'
```
In the env I have installed these libraries:
```
aiohttp==3.8.3
aiosignal==1.3.1
async-timeout==4.0.2
attrs==22.2.0
brotlipy==0.7.0
certifi @ file:///croot/certifi_1671487769961/work/certifi
cffi @ file:///croot/cffi_1670423208954/work
charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work
cryptography @ file:///croot/cryptography_1673298753778/work
datasets==2.8.0
dill==0.3.6
filelock==3.9.0
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
frozenlist==1.3.3
fsspec==2022.11.0
huggingface-hub==0.11.1
idna @ file:///croot/idna_1666125576474/work
mkl-fft==1.3.1
mkl-random @ file:///home/builder/ci_310/mkl_random_1641843545607/work
mkl-service==2.4.0
multidict==6.0.4
multiprocess==0.70.14
numpy @ file:///croot/numpy_and_numpy_base_1672336185480/work
packaging==23.0
pandas==1.5.2
Pillow==9.3.0
protobuf==3.19.6
pyarrow==10.0.1
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
pyOpenSSL @ file:///opt/conda/conda-bld/pyopenssl_1643788558760/work
PySocks @ file:///home/builder/ci_310/pysocks_1640793678128/work
python-dateutil==2.8.2
pytz==2022.7.1
PyYAML==6.0
regex==2022.10.31
requests @ file:///opt/conda/conda-bld/requests_1657734628632/work
responses==0.18.0
sentencepiece==0.1.97
six @ file:///tmp/build/80754af9/six_1644875935023/work
tokenizers==0.13.2
torch==1.13.1
torchaudio==0.13.1
torchvision==0.14.1
tqdm==4.64.1
transformers @ git+https://github.com/huggingface/transformers.git@2411f0e465e761790879e605a4256f3d4afb7f82
typing_extensions @ file:///croot/typing_extensions_1669924550328/work
urllib3 @ file:///croot/urllib3_1670526988650/work
xxhash==3.2.0
yarl==1.8.2
```
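A quick sanity check, in case the interpreter is picking up a different protobuf than the one pip lists above (the missing `_internal_create_key` attribute typically indicates an older protobuf runtime shadowing the pinned one):
```python
import google.protobuf
from google.protobuf import descriptor

print(google.protobuf.__version__)
# False here means the runtime protobuf predates what the generated code expects,
# matching the AttributeError in the traceback above
print(hasattr(descriptor, "_internal_create_key"))
```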
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21147/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21146
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21146/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21146/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21146/events
|
https://github.com/huggingface/transformers/pull/21146
| 1,535,890,700
|
PR_kwDOCUB6oc5HhDhX
| 21,146
|
Fix the issue where the output dict of a jit model could not be sliced with [:2]
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@yao-matrix",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
"TypeError: unhashable type: 'slice'"
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
- pipelines: @Narsil
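For context, a minimal sketch of the failure mode this PR addresses (illustrative only, not the pipeline's actual code):
```python
import torch

# a jit-traced model configured to return a dict yields a mapping,
# and mappings cannot be sliced
outputs = {"start_logits": torch.zeros(1, 10), "end_logits": torch.zeros(1, 10)}
try:
    outputs[:2]
except TypeError as err:
    print(err)  # unhashable type: 'slice'

# the fix, in spirit: materialize the values as a tuple before slicing
start_logits, end_logits = tuple(outputs.values())[:2]
```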
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21146/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21146",
"html_url": "https://github.com/huggingface/transformers/pull/21146",
"diff_url": "https://github.com/huggingface/transformers/pull/21146.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21146.patch",
"merged_at": 1674052889000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21145
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21145/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21145/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21145/events
|
https://github.com/huggingface/transformers/issues/21145
| 1,535,717,928
|
I_kwDOCUB6oc5biTIo
| 21,145
|
Unable to load weights from pytorch checkpoint file
|
{
"login": "manandey",
"id": 6687858,
"node_id": "MDQ6VXNlcjY2ODc4NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manandey",
"html_url": "https://github.com/manandey",
"followers_url": "https://api.github.com/users/manandey/followers",
"following_url": "https://api.github.com/users/manandey/following{/other_user}",
"gists_url": "https://api.github.com/users/manandey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manandey/subscriptions",
"organizations_url": "https://api.github.com/users/manandey/orgs",
"repos_url": "https://api.github.com/users/manandey/repos",
"events_url": "https://api.github.com/users/manandey/events{/privacy}",
"received_events_url": "https://api.github.com/users/manandey/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @manandey,\r\nHere is the error I get: \r\n```\r\nRuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory\r\n```\r\nYour saved model is probably corrupted, could you tell us how did you saved the model, or alternatively give it another try?",
"Hi @younesbelkada, when I try to load the model by connecting my colab to the GCP instance where the checkpoints are saved, the model seems to be loading perfectly fine as can be seen in the attached snapshot. But when I try to download the checkpoint and upload it to hub, the downloaded files seem to become corrupted. \r\n\r\nTo download the checkpoint, I am using something like this: `!tar -zcvf checkpoints.tar.gz checkpoint/checkpoint-300000`\r\n\r\n\r\n",
"If I understood correctly you are manually uploading the weights on the Hub?\r\nCan you maybe try: \r\n```\r\nmodel.push_to_hub(\"taskydata/deberta-v3-base_10xp3nirstbbflanseuni_10xc4\")\r\n```\r\nafter the lines that you have attached above. \r\nMake sure to login from your notebook with:\r\n```\r\nfrom huggingface_hub import notebook_login\r\nnotebook_login()\r\n```",
"Thanks a lot, @younesbelkada! It worked! ",
"Very happy that it worked! Thanks for guiding us precisely through your issue"
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
### System Info
I trained a model and uploaded the checkpoint in the Hub [here](https://huggingface.co/taskydata/deberta-v3-base_10xp3nirstbbflanseuni_10xc4). When I try to load the model, I get the following error message:
`OSError: Unable to load weights from pytorch checkpoint file for '/root/.cache/huggingface/hub/models--taskydata--deberta-v3-base_10xp3nirstbbflanseuni_10xc4/snapshots/f5d6b49731ea0b36601f151dd67623380462a3cb/pytorch_model.bin' at '/root/.cache/huggingface/hub/models--taskydata--deberta-v3-base_10xp3nirstbbflanseuni_10xc4/snapshots/f5d6b49731ea0b36601f151dd67623380462a3cb/pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.`
Also, when I try to run it using the inference API, I get:
`Could not load model taskydata/deberta-v3-base_10xp3nirstbbflanseuni_10xc4 with any of the following classes: (<class 'transformers.models.deberta_v2.modeling_deberta_v2.DebertaV2ForSequenceClassification'>, <class 'transformers.models.deberta_v2.modeling_tf_deberta_v2.TFDebertaV2ForSequenceClassification'>).`
Transformers version: `4.25.1`
Any help on how to resolve this would be greatly appreciated. Thanks!
cc. @LysandreJik @younesbelkada @ArthurZucker
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Mentioned above.
### Expected behavior
The model should be loading since all the files are uploaded.
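For reference, the workaround from the discussion above, as a sketch (assuming the checkpoint path mentioned in the thread and a sequence-classification head):
```python
from huggingface_hub import notebook_login
from transformers import AutoModelForSequenceClassification

notebook_login()  # authenticate with a write token first

# load the checkpoint in the training environment, then push it directly,
# avoiding the manual tar/download/upload round-trip that corrupted the weights
model = AutoModelForSequenceClassification.from_pretrained("checkpoint/checkpoint-300000")
model.push_to_hub("taskydata/deberta-v3-base_10xp3nirstbbflanseuni_10xc4")
```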
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21145/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21144
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21144/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21144/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21144/events
|
https://github.com/huggingface/transformers/pull/21144
| 1,535,367,629
|
PR_kwDOCUB6oc5HfUTA
| 21,144
|
Accept batched tensor of images as input to image processor
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Adds functionality to `image_utils` so that a batched tensor of images can be accepted as input to the image processors.
Fixes #21142 #14650
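A sketch of the intended usage once this lands (checkpoint chosen purely for illustration):
```python
import torch
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

# a 4D batched tensor (batch, channels, height, width) instead of a list of images
batch = torch.randint(0, 256, (10, 3, 224, 224), dtype=torch.uint8)
inputs = image_processor(images=batch, return_tensors="pt")
print(inputs.pixel_values.shape)
```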
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21144/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21144",
"html_url": "https://github.com/huggingface/transformers/pull/21144",
"diff_url": "https://github.com/huggingface/transformers/pull/21144.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21144.patch",
"merged_at": 1674728126000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21143
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21143/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21143/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21143/events
|
https://github.com/huggingface/transformers/pull/21143
| 1,535,214,708
|
PR_kwDOCUB6oc5HezeT
| 21,143
|
Fixes to TF collators
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,673
| 1,673
|
MEMBER
| null |
This PR makes a couple of fixes to TF data collators:
1) Fixes an incorrect call to `is_integer` when the default data collator receives `tf.Tensor` input and outputs `tf.Tensor` as well
2) Prefer "np" tensors rather than "tf" tensors when calling our collators via `to_tf_dataset`. This is because data preprocessing is generally done with `np.ndarray` rather than `tf.Tensor` anyway, and Keras/`tf.data` can do the final conversion to `tf.Tensor` for us.
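As an illustration of point 2 (a sketch, not the exact `to_tf_dataset` internals):
```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# "np" output is preferred when the collator feeds tf.data, which performs
# the final conversion to tf.Tensor itself
collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="np")
batch = collator([tokenizer("hello world"), tokenizer("hi")])
```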
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21143/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21143",
"html_url": "https://github.com/huggingface/transformers/pull/21143",
"diff_url": "https://github.com/huggingface/transformers/pull/21143.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21143.patch",
"merged_at": 1673957937000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21142
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21142/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21142/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21142/events
|
https://github.com/huggingface/transformers/issues/21142
| 1,535,162,783
|
I_kwDOCUB6oc5bgLmf
| 21,142
|
Error when passing a tensor of images to CLIPProcessor
|
{
"login": "AntreasAntoniou",
"id": 10792502,
"node_id": "MDQ6VXNlcjEwNzkyNTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AntreasAntoniou",
"html_url": "https://github.com/AntreasAntoniou",
"followers_url": "https://api.github.com/users/AntreasAntoniou/followers",
"following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}",
"gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions",
"organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs",
"repos_url": "https://api.github.com/users/AntreasAntoniou/repos",
"events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}",
"received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Seems like `is_batched` is at fault for this. Doc seems a bit lacking 😅 \r\nShould probably be replaced with : \r\n```python \r\ndef is_batched(img):\r\n if isinstance(img, (list, tuple)):\r\n return is_valid_image(img[0])\r\n return is_valid_image(img)\r\n```\r\nPretty sure this is expected as our tests are run on `lists` or `tuples` of images. ",
"@ArthurZucker @AntreasAntoniou Yep - it's down to how batches are checked, and the processing classes (feature extractors, tokenizers, image processors) expect either a single object e.g. image or a list/tuple of objects. It should be possible to take a batched tensor and create a list from it, we just have to be careful in our assumptions. I'll look into it. \r\n\r\nThe example: \r\n```\r\ndef is_batched(img):\r\n if isinstance(img, (list, tuple)):\r\n return is_valid_image(img[0])\r\n return is_valid_image(img)\r\n```\r\nwon't work, as the check for `is_batched` determines whether the input needs to be wrapped in a list. This is because the image processors iterate over a list of images i.e. a single image would return `True` here and break things downstream. "
] | 1,673
| 1,674
| 1,674
|
NONE
| null |
### System Info
- huggingface_hub version: 0.11.1
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.10.8
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /root/.huggingface/token
- Has saved token ?: False
- Configured git credential helpers: !f()
- FastAI: N/A
- Tensorflow: 2.11.0
- Torch: 1.13.1
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
### Who can help?
@ArthurZucker @amyeroberts @NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Run the following script:
```python
from transformers import CLIPProcessor
import torch
model_name_or_path = "openai/clip-vit-large-patch14"
processor: CLIPProcessor = CLIPProcessor.from_pretrained(
model_name_or_path
)
dummy_input = torch.randn(10, 3, 224, 224)
dummy_output = processor(images=dummy_input, return_tensors="pt")
```
2. Observe the following error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File /opt/conda/envs/main/lib/python3.10/site-packages/PIL/Image.py:2953, in fromarray(obj, mode)
2952 try:
-> 2953 mode, rawmode = _fromarray_typemap[typekey]
2954 except KeyError as e:
KeyError: ((1, 1, 224, 224), '|u1')
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
Cell In[2], line 10
4 processor: CLIPProcessor = CLIPProcessor.from_pretrained(
5 model_name_or_path
6 )
8 dummy_input = torch.randn(10, 3, 224, 224)
---> 10 dummy_output = processor(images=dummy_input, return_tensors="pt")
File /opt/conda/envs/main/lib/python3.10/site-packages/transformers/models/clip/processing_clip.py:85, in CLIPProcessor.__call__(self, text, images, return_tensors, **kwargs)
82 encoding = self.tokenizer(text, return_tensors=return_tensors, **kwargs)
84 if images is not None:
---> 85 image_features = self.feature_extractor(images, return_tensors=return_tensors, **kwargs)
87 if text is not None and images is not None:
88 encoding["pixel_values"] = image_features.pixel_values
...
-> 2955 raise TypeError("Cannot handle this data type: %s, %s" % typekey) from e
2956 else:
2957 rawmode = mode
TypeError: Cannot handle this data type: (1, 1, 224, 224), |u1
```
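(A possible interim workaround, as an untested sketch: unbind the 4D batch into a list of 3D image tensors before calling the processor.)
```python
# hypothetical workaround: pass a list of 3D tensors instead of a 4D batch
image_list = list(dummy_input.unbind(0))
dummy_output = processor(images=image_list, return_tensors="pt")
```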
### Expected behavior
The function should return a preprocessed tensor containing a batch of images.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21142/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21142/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21141
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21141/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21141/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21141/events
|
https://github.com/huggingface/transformers/pull/21141
| 1,535,058,199
|
PR_kwDOCUB6oc5HeRuD
| 21,141
|
feat: add standalone guide on XLA support.
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sayakpaul \r\n\r\nIt seems there is an issue with your CircleCI permissions.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?",
"@ydshieh I don't have CircleCI signed in nor have I installed it here: https://github.com/settings/applications. ",
"Could you login CircleCI with your GitHub account, and follow `huggingface/transformers`?\r\n\r\nSee https://circleci.com/docs/projects/\r\n\r\n(but it's kinda strange - you have opened a lot of PRs before, so not sure why we have this issue now)",
"Should be all good now I guess:\r\n\r\n<img width=\"896\" alt=\"image\" src=\"https://user-images.githubusercontent.com/22957388/212717447-19e1b042-d39f-44f0-bf8c-faef7b97c2ea.png\">\r\n",
"Yeah, then we can try to push an empty commit to see if the CI will run :-)\r\n```\r\ngit commit --allow-empty -m \"Empty commit to trigger CI\"\r\ngit push\r\n```\r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger addressed your comments including [this one](https://github.com/huggingface/transformers/pull/21141/files#r1072172489). Let me know if I am good to merge given all tests are green. ",
"Yes, all good!"
] | 1,673
| 1,673
| 1,673
|
MEMBER
| null |
# What does this PR do?
We have had XLA support for our TF generation models (GPT2, Whisper, for example) for a while. This PR adds a standalone guide in the doc to discuss it.
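As a taste of what the guide covers, a minimal XLA-compiled generation sketch (following the pattern from the TF generation blog post; padding to fixed shapes avoids retracing):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")

# wrap generate in a jit-compiled tf.function
xla_generate = tf.function(model.generate, jit_compile=True)

inputs = tokenizer(["TensorFlow is"], padding="max_length", max_length=8, return_tensors="tf")
outputs = xla_generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```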
Cc: @Rocketknight1 @amyeroberts @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21141/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21141",
"html_url": "https://github.com/huggingface/transformers/pull/21141",
"diff_url": "https://github.com/huggingface/transformers/pull/21141.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21141.patch",
"merged_at": 1673964480000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21140
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21140/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21140/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21140/events
|
https://github.com/huggingface/transformers/pull/21140
| 1,535,003,285
|
PR_kwDOCUB6oc5HeF2c
| 21,140
|
Rename test_feature_extraction files
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you @amyeroberts !\r\n\r\nI think there are a few lines in `tests/utils/test_add_new_model_like.py` to be changed, like\r\n\r\nhttps://github.com/amyeroberts/transformers/blob/6b10c045e259a47f3786ceb11089fe418828346e/tests/utils/test_add_new_model_like.py#L62\r\n",
"@ydshieh Done! "
] | 1,673
| 1,673
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
Renames the tests for image processors: `test_feature_extraction_xxx.py` -> `test_image_processing_xxx.py`
A follow up PR will change the feature extractor references to equivalent image processor ones in the files.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21140/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21140",
"html_url": "https://github.com/huggingface/transformers/pull/21140",
"diff_url": "https://github.com/huggingface/transformers/pull/21140.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21140.patch",
"merged_at": 1673964248000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21139
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21139/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21139/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21139/events
|
https://github.com/huggingface/transformers/pull/21139
| 1,534,954,008
|
PR_kwDOCUB6oc5Hd7Kk
| 21,139
|
Added clefourrier as ref point for graph models in bug reports
|
{
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Also, I missed this file\r\n```\r\n.github/PULL_REQUEST_TEMPLATE.md\r\n```\r\nif you think it is relevant.",
"Add @sgugger for final review as I never changed these files before, and this is also more administrative decision :-)",
"Did not see the comment section in PULL_REQUEST_TEMPLATE when I opened them in browser view - edited!\r\n\r\nEdit: However, not sure what I could add to `feature-request`?",
"> Did not see the comment section in PULL_REQUEST_TEMPLATE when I opened them in browser view - edited!\r\n> \r\n> Edit: However, not sure what I could add to `feature-request`?\r\n\r\nMy bad, my brain is not completely recovered from all the `drink` I had last week."
] | 1,673
| 1,673
| 1,673
|
MEMBER
| null |
# What does this PR do?
Added myself as entry point for graph models in issue doc.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21139/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21139",
"html_url": "https://github.com/huggingface/transformers/pull/21139",
"diff_url": "https://github.com/huggingface/transformers/pull/21139.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21139.patch",
"merged_at": 1673878363000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21138
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21138/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21138/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21138/events
|
https://github.com/huggingface/transformers/issues/21138
| 1,534,768,182
|
I_kwDOCUB6oc5berQ2
| 21,138
|
Feature Request: Flax Whisper
|
{
"login": "OhadRubin",
"id": 4252994,
"node_id": "MDQ6VXNlcjQyNTI5OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4252994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OhadRubin",
"html_url": "https://github.com/OhadRubin",
"followers_url": "https://api.github.com/users/OhadRubin/followers",
"following_url": "https://api.github.com/users/OhadRubin/following{/other_user}",
"gists_url": "https://api.github.com/users/OhadRubin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OhadRubin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OhadRubin/subscriptions",
"organizations_url": "https://api.github.com/users/OhadRubin/orgs",
"repos_url": "https://api.github.com/users/OhadRubin/repos",
"events_url": "https://api.github.com/users/OhadRubin/events{/privacy}",
"received_events_url": "https://api.github.com/users/OhadRubin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sanchit-gandhi ",
"Hey @OhadRubin! Thanks to the fantastic work by @andyehrenberg it's nearly complete: https://github.com/huggingface/transformers/pull/20479\r\n\r\nWill do a final review tomorrow and then get it merged asap!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,676
| 1,676
|
NONE
| null |
### Feature request
It already has TF and PyTorch support. Would be nice to use it on Flax as well.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21138/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21137
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21137/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21137/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21137/events
|
https://github.com/huggingface/transformers/issues/21137
| 1,534,747,033
|
I_kwDOCUB6oc5bemGZ
| 21,137
|
microsoft/markuplm-base-finetuned-websrc fails when used in a `question-answering` pipeline
|
{
"login": "juliensimon",
"id": 3436143,
"node_id": "MDQ6VXNlcjM0MzYxNDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3436143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juliensimon",
"html_url": "https://github.com/juliensimon",
"followers_url": "https://api.github.com/users/juliensimon/followers",
"following_url": "https://api.github.com/users/juliensimon/following{/other_user}",
"gists_url": "https://api.github.com/users/juliensimon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juliensimon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juliensimon/subscriptions",
"organizations_url": "https://api.github.com/users/juliensimon/orgs",
"repos_url": "https://api.github.com/users/juliensimon/repos",
"events_url": "https://api.github.com/users/juliensimon/events{/privacy}",
"received_events_url": "https://api.github.com/users/juliensimon/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nMarkupLM isn't supported by the QA pipeline, similar to how LayoutLM (and v2 and v3) aren't supported by it.\r\n\r\nThe reason for this is that MarkupLM requires additional inputs besides the ones that text models require, like `input_ids` and `attention_mask`. For LayoutLM, we created a separate `DocumentQuestionAnsweringPipeline` to account for this.",
"Thanks for the answer. Until we have a proper pipeline for this model, could we add your explanation to the docs? If I couldn't figure it out, I suspect many more users will hit the same issue :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@NielsRogge I would like to work on this issue if it is still available.",
"@NielsRogge I can create a `MarkupQuestionAnsweingPipeline` for this.",
"This model is highly specific actually, not sure if a pipeline for it makes sense.\r\n\r\nWill remove the good first issue from this, might be better to just overwrite the preprocess and postprocess steps of the QA pipeline to make it work for MarkupLM. "
] | 1,673
| 1,679
| 1,679
|
NONE
| null |
### System Info
Python 3.9.7
Transformers 4.25.1
`microsoft/markuplm-base-finetuned-websrc` fails when used in a `question-answering` pipeline (see test script below).
```
Traceback (most recent call last):
File "/Users/juliensimon/markuplm/app.py", line 24, in <module>
result = pipe(question="What are the trending stocks?", context=page)
File "/Users/juliensimon/Envs/vscodedemo/lib/python3.9/site-packages/transformers/pipelines/question_answering.py", line 392, in __call__
return super().__call__(examples[0], **kwargs)
File "/Users/juliensimon/Envs/vscodedemo/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1074, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/Users/juliensimon/Envs/vscodedemo/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1095, in run_single
for model_inputs in self.preprocess(inputs, **preprocess_params):
File "/Users/juliensimon/Envs/vscodedemo/lib/python3.9/site-packages/transformers/pipelines/question_answering.py", line 403, in preprocess
max_seq_len = min(self.tokenizer.model_max_length, 384)
AttributeError: 'MarkupLMProcessor' object has no attribute 'model_max_length'
```
Setting `max_seq_length` in the pipeline call solves the issue, but a similar one happens with the `is_fast` attribute. Both are indeed not defined in [MarkupLMProcessor](https://github.com/huggingface/transformers/blob/05b8e25fffd61feecb21928578ad39e63af21b4f/src/transformers/models/markuplm/processing_markuplm.py#L25).
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import requests
from transformers import (
pipeline,
AutoModelForQuestionAnswering,
MarkupLMProcessor,
)
def get_page(url):
response = requests.get(url)
return response.text
model = "microsoft/markuplm-base-finetuned-websrc"
processor = MarkupLMProcessor.from_pretrained(model)
model = AutoModelForQuestionAnswering.from_pretrained(model)
pipe = pipeline("question-answering", model=model, tokenizer=processor)
url = "https://finance.yahoo.com"
page = get_page(url)
result = pipe(question="What are the trending stocks?", context=page)
```
### Expected behavior
I would expect the pipeline to work. If this model isn't supported, then we should make it clear in the [docs](https://huggingface.co/docs/transformers/model_doc/markuplm).
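Until pipeline support exists, a hedged sketch of querying the model directly (parameter names follow the `MarkupLMProcessor` docs; this reuses `processor`, `model`, and `get_page` from the reproduction script above):
```python
import torch

page = get_page("https://finance.yahoo.com")
encoding = processor(page, questions="What are the trending stocks?",
                     truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)

start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
answer = processor.decode(encoding.input_ids[0, start : end + 1]).strip()
print(answer)
```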
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21137/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21136
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21136/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21136/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21136/events
|
https://github.com/huggingface/transformers/pull/21136
| 1,534,731,378
|
PR_kwDOCUB6oc5HdKqw
| 21,136
|
Fix `RealmModelIntegrationTest.test_inference_open_qa`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,673
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
The test `RealmModelIntegrationTest::test_inference_open_qa` has been failing since my PR #21000. That integration test creates a config manually with `config = RealmConfig()` and passes it to `from_pretrained`, so the config lacks the attribute `searcher_seq_len` (which was removed in #21000).
Using `from_pretrained` without specifying `config` instead loads the config from the Hub checkpoint, which has `searcher_seq_len` (picked up via `super().__init__(..., **kwargs)`), and fixes the test.
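In short (checkpoint name as used by the integration test; retriever omitted for brevity; illustrative sketch):
```python
from transformers import RealmConfig, RealmForOpenQA

# a manually built config lacks checkpoint-specific attributes such as searcher_seq_len
config = RealmConfig()
model = RealmForOpenQA.from_pretrained("google/realm-orqa-nq-openqa", config=config)

# without an explicit config, the full Hub config (including searcher_seq_len) is used
model = RealmForOpenQA.from_pretrained("google/realm-orqa-nq-openqa")
```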
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21136/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21136",
"html_url": "https://github.com/huggingface/transformers/pull/21136",
"diff_url": "https://github.com/huggingface/transformers/pull/21136.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21136.patch",
"merged_at": 1673878193000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21135
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21135/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21135/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21135/events
|
https://github.com/huggingface/transformers/issues/21135
| 1,534,726,821
|
I_kwDOCUB6oc5behKl
| 21,135
|
AttributeError: module 'tensorflow' has no attribute 'Tensor' when using documentation code (Tokenizer.batch_decode)
|
{
"login": "gaurav-95",
"id": 59512848,
"node_id": "MDQ6VXNlcjU5NTEyODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/59512848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gaurav-95",
"html_url": "https://github.com/gaurav-95",
"followers_url": "https://api.github.com/users/gaurav-95/followers",
"following_url": "https://api.github.com/users/gaurav-95/following{/other_user}",
"gists_url": "https://api.github.com/users/gaurav-95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gaurav-95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaurav-95/subscriptions",
"organizations_url": "https://api.github.com/users/gaurav-95/orgs",
"repos_url": "https://api.github.com/users/gaurav-95/repos",
"events_url": "https://api.github.com/users/gaurav-95/events{/privacy}",
"received_events_url": "https://api.github.com/users/gaurav-95/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, @gaurav-95 \r\nPlease run `transformers-cli env` in terminal and share the full system info so it's easier to reproduce the error.",
"transformers-cli env gives me this. \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python38\\lib\\runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python38\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\Admin\\Desktop\\Projects\\NLP_Cron\\cronenv\\Scripts\\transformers-cli.exe\\__main__.py\", line 4, in <module>\r\n File \"C:\\Users\\Admin\\Desktop\\Projects\\NLP_Cron\\cronenv\\lib\\site-packages\\transformers\\commands\\transformers_cli.py\", line 24, in <module>\r\n from .pt_to_tf import PTtoTFCommand\r\n File \"C:\\Users\\Admin\\Desktop\\Projects\\NLP_Cron\\cronenv\\lib\\site-packages\\transformers\\commands\\pt_to_tf.py\", line 46, in <module>\r\n tf.config.experimental.enable_tensor_float_32_execution(False)\r\nAttributeError: module 'tensorflow' has no attribute 'config'\r\n```\r\n\r\nCould you elaborate on what system info do you need?\r\nIm running on an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00GHz 1.19 GHz\r\n20.0 GB (19.8 GB usable) RAM\r\n\r\nNo dedicated gpu in machine. My virtualenvironment is called \"cronenv\"\r\n\r\nUpdate: I was able to run the same code on a google colab notebook, seems like a problem with my environment.",
"Hi, @gaurav-95 \r\nActually if you run the above code it should output something like this, \r\n- `transformers` version: 4.25.1\r\n- Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35\r\n- Python version: 3.9.15\r\n- Huggingface_hub version: 0.11.1\r\n- PyTorch version (GPU?): 1.13.1 (True)\r\n- Tensorflow version (GPU?): 2.10.0 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\nSince you are not getting it could you please check your transformers installation? (just run `import transformers` and check if it successfully imports or gives an error) \r\n \r\n",
"Thanks for getting back and hinting towards the problem. I can confirm there was something wrong with my python installation. \r\n\r\nSteps that resolved it for me.\r\n\r\n1. I made a requirements file of the existing install\r\n2. I deleted the existing virtual environment.\r\n3. Re-installed python.\r\n4. Re-installed dependencies from saved requirements file.\r\n5. Ran code and it works now!",
"> Thanks for getting back and hinting towards the problem. I can confirm there was something wrong with my python installation.\r\n> \r\n> Steps that resolved it for me.\r\n> \r\n> 1. I made a requirements file of the existing install\r\n> 2. I deleted the existing virtual environment.\r\n> 3. Re-installed python.\r\n> 4. Re-installed dependencies from saved requirements file.\r\n> 5. Ran code and it works now!\r\n\r\nLOL, I got the same thing happen to me as well I think??? Ill give this a try :) BTW how did you delete the env?"
] | 1,673
| 1,675
| 1,673
|
NONE
| null |
### System Info
Windows 10, VSCode
### Who can help?
_No response_
### Information
I was referring to the documentation on huggingface to run the facebook OPT model here:
https://huggingface.co/docs/transformers/main/en/model_doc/opt#transformers.OPTForCausalLM
And I've received the following error on my Windows 10 machine in VScode.
```
Traceback (most recent call last):
File "c:/Users/Admin/Desktop/Projects/NLP_Cron/script_chat.py", line 11, in <module>
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
File "C:\Users\Admin\Desktop\Projects\NLP_Cron\cronenv\lib\site-packages\transformers\tokenization_utils_base.py", line 3429, in batch_decode
return [
File "C:\Users\Admin\Desktop\Projects\NLP_Cron\cronenv\lib\site-packages\transformers\tokenization_utils_base.py", line 3430, in <listcomp>
self.decode(
File "C:\Users\Admin\Desktop\Projects\NLP_Cron\cronenv\lib\site-packages\transformers\tokenization_utils_base.py", line 3466, in decode
token_ids = to_py_obj(token_ids)
File "C:\Users\Admin\Desktop\Projects\NLP_Cron\cronenv\lib\site-packages\transformers\utils\generic.py", line 160, in to_py_obj
elif is_tf_tensor(obj):
File "C:\Users\Admin\Desktop\Projects\NLP_Cron\cronenv\lib\site-packages\transformers\utils\generic.py", line 136, in is_tf_tensor
return False if not is_tf_available() else _is_tensorflow(x)
File "C:\Users\Admin\Desktop\Projects\NLP_Cron\cronenv\lib\site-packages\transformers\utils\generic.py", line 129, in _is_tensorflow
return isinstance(x, tf.Tensor)
AttributeError: module 'tensorflow' has no attribute 'Tensor'
```
I first thought it was specific to this model, but I'm facing the same issue on other models.
I have tried uninstalling TensorFlow and reinstalling it.
I have upgraded "transformers" library as well. But to no avail. This seems to be a recent problem.
The virtual environment I'm using says I have these versions of tensorflow and transformers.
- transformers 4.25.1
- tensorflow 2.11.0
### Reproduction
Steps to reproduce the behaviour:
1. Go to https://huggingface.co/docs/transformers/main/en/model_doc/opt#transformers.tensorflow
2. Run the example snippet consisting of this code
```python
from transformers import GPT2Tokenizer, OPTForCausalLM
model = OPTForCausalLM.from_pretrained("facebook/opt-350m")
tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-350m")
prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt")
# Generate
generate_ids = model.generate(inputs.input_ids, max_length=30)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```
Assuming the required libraries are installed, the error message shows up.
### Expected behavior
Expected the output as shown in the documentation.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21135/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21134
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21134/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21134/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21134/events
|
https://github.com/huggingface/transformers/pull/21134
| 1,534,720,239
|
PR_kwDOCUB6oc5HdIPk
| 21,134
|
Add ConvNeXt-V2 Model
|
{
"login": "IMvision12",
"id": 88665786,
"node_id": "MDQ6VXNlcjg4NjY1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IMvision12",
"html_url": "https://github.com/IMvision12",
"followers_url": "https://api.github.com/users/IMvision12/followers",
"following_url": "https://api.github.com/users/IMvision12/following{/other_user}",
"gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions",
"organizations_url": "https://api.github.com/users/IMvision12/orgs",
"repos_url": "https://api.github.com/users/IMvision12/repos",
"events_url": "https://api.github.com/users/IMvision12/events{/privacy}",
"received_events_url": "https://api.github.com/users/IMvision12/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"It does look like the model code is exactly the same at a first glance (saw everything is copied from ConvNext). If that is the case, yes to re-using the code of ConvNext, but if we need to make modifications in the convnext modeling file, we should add ConvNext V2 as a new model like in the PR.",
"> It does look like the model code is exactly the same at a first glance (saw everything is copied from ConvNext). If that is the case, yes to re-using the code of ConvNext, but if we need to make modifications in the convnext modeling file, we should add ConvNext V2 as a new model like in the PR.\r\n\r\nYes, the code is almost the same, but it adds a Global Response Normalization (GRN) module and removes the layer_scale_parameter from the ConvNeXtV2Layer. Makes more sense to add it as a new model then.\r\n\r\nCC @IMvision12 ",
"Thanks for the review @alaradirik I will address all comments!",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds ConvNeXt-V2 to transformers.
original repo: https://github.com/facebookresearch/ConvNeXt-V2
paper: https://arxiv.org/abs/2301.00808
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
@alaradirik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21134/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21134",
"html_url": "https://github.com/huggingface/transformers/pull/21134",
"diff_url": "https://github.com/huggingface/transformers/pull/21134.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21134.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21133
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21133/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21133/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21133/events
|
https://github.com/huggingface/transformers/pull/21133
| 1,534,714,330
|
PR_kwDOCUB6oc5HdG_g
| 21,133
|
[GIT] Fix training
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks, fixed now!"
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR ensures that GIT can be properly fine-tuned. As GIT is a causal, GPT-like model that is also conditioned on CLIP-embedded image patches, one only needs to compute a loss on the predicted text tokens.
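As a rough sketch of what "loss only on the text tokens" looks like (an illustration of the idea, not the actual modeling code; the toy shapes below are made up), the standard trick is to set the label of every non-text position to `-100`, the ignore index of cross entropy:
```
import torch
import torch.nn.functional as F

batch, num_image_tokens, num_text_tokens, vocab = 1, 4, 6, 100  # toy sizes
logits = torch.randn(batch, num_image_tokens + num_text_tokens, vocab)
text_labels = torch.randint(0, vocab, (batch, num_text_tokens))

# -100 is ignored by cross_entropy, so image positions contribute no loss
labels = torch.full((batch, num_image_tokens + num_text_tokens), -100)
labels[:, num_image_tokens:] = text_labels

loss = F.cross_entropy(logits.view(-1, vocab), labels.view(-1), ignore_index=-100)
```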
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21133/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21133",
"html_url": "https://github.com/huggingface/transformers/pull/21133",
"diff_url": "https://github.com/huggingface/transformers/pull/21133.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21133.patch",
"merged_at": 1673879859000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21132
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21132/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21132/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21132/events
|
https://github.com/huggingface/transformers/pull/21132
| 1,534,577,693
|
PR_kwDOCUB6oc5HcpoD
| 21,132
|
Fixing batching pipelines on single items for ChunkPipeline
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixing #20783
The issue was that the iterator was never invoked for certain input types.
Regardless of the input type, a `ChunkPipeline` may always end up iterating over
its inputs in order to use batching (see the sketch below).
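To illustrate the pattern (a simplified sketch, not the actual `ChunkPipeline` code; `split_into_chunks` is a made-up stand-in): even a single input has to go through an iterator, because one logical item can expand into several model passes:
```
def split_into_chunks(item, size=2):
    # stand-in for real chunking (e.g. audio frames, long-document windows)
    return [item[i:i + size] for i in range(0, len(item), size)]

def preprocess(single_item):
    # one logical input -> several "chunks", each its own model pass
    for chunk in split_into_chunks(single_item):
        yield chunk

print(list(preprocess([1, 2, 3, 4, 5])))  # [[1, 2], [3, 4], [5]]
```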
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21132/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21132",
"html_url": "https://github.com/huggingface/transformers/pull/21132",
"diff_url": "https://github.com/huggingface/transformers/pull/21132.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21132.patch",
"merged_at": 1673877868000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21131
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21131/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21131/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21131/events
|
https://github.com/huggingface/transformers/issues/21131
| 1,534,377,163
|
I_kwDOCUB6oc5bdLzL
| 21,131
|
could set max_length or max_seq_length bigger than 512 in NER?
|
{
"login": "ucas010",
"id": 50656998,
"node_id": "MDQ6VXNlcjUwNjU2OTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/50656998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ucas010",
"html_url": "https://github.com/ucas010",
"followers_url": "https://api.github.com/users/ucas010/followers",
"following_url": "https://api.github.com/users/ucas010/following{/other_user}",
"gists_url": "https://api.github.com/users/ucas010/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ucas010/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ucas010/subscriptions",
"organizations_url": "https://api.github.com/users/ucas010/orgs",
"repos_url": "https://api.github.com/users/ucas010/repos",
"events_url": "https://api.github.com/users/ucas010/events{/privacy}",
"received_events_url": "https://api.github.com/users/ucas010/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for such questions, as we keep issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,677
| 1,677
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-3.10.0-1160.81.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification
For fine-tuning in NER, can I set --max_length or --max_seq_length larger than 512?
```
File "/data/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 237, in forward
    embeddings += position_embeddings
RuntimeError: The size of tensor a (2048) must match the size of tensor b (512) at non-singleton dimension 1
```
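For context (my reading of the traceback, not an official answer): BERT uses learned absolute position embeddings whose table size is fixed at pre-training time, so inputs cannot exceed `max_position_embeddings` no matter what the script flags say:
```
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bert-base-cased")
print(config.max_position_embeddings)  # 512 -> --max_seq_length=2048 cannot work
```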
### Expected behavior
as I said before
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21131/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21130
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21130/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21130/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21130/events
|
https://github.com/huggingface/transformers/pull/21130
| 1,534,065,944
|
PR_kwDOCUB6oc5Ha6BA
| 21,130
|
Small simplification to TopKLogitsWarper
|
{
"login": "njhill",
"id": 16958488,
"node_id": "MDQ6VXNlcjE2OTU4NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/16958488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/njhill",
"html_url": "https://github.com/njhill",
"followers_url": "https://api.github.com/users/njhill/followers",
"following_url": "https://api.github.com/users/njhill/following{/other_user}",
"gists_url": "https://api.github.com/users/njhill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/njhill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/njhill/subscriptions",
"organizations_url": "https://api.github.com/users/njhill/orgs",
"repos_url": "https://api.github.com/users/njhill/repos",
"events_url": "https://api.github.com/users/njhill/events{/privacy}",
"received_events_url": "https://api.github.com/users/njhill/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @gante "
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
The max of `top_k` and `min_tokens_to_keep` performed on every call can just be done once up-front.
Apologies if there's some reason for it being this way that I overlooked!
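A condensed sketch of the change (simplified from the real `TopKLogitsWarper`; argument validation omitted):
```
import torch

class TopKLogitsWarper:
    def __init__(self, top_k, filter_value=-float("inf"), min_tokens_to_keep=1):
        # do the max() once here instead of on every __call__
        self.top_k = max(top_k, min_tokens_to_keep)
        self.filter_value = filter_value

    def __call__(self, input_ids, scores):
        top_k = min(self.top_k, scores.size(-1))  # never exceed the vocab size
        indices_to_remove = scores < torch.topk(scores, top_k)[0][..., -1, None]
        return scores.masked_fill(indices_to_remove, self.filter_value)

scores = torch.randn(1, 10)
print(TopKLogitsWarper(top_k=3)(None, scores))
```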
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] (N/A) Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] (N/A) Did you write any new necessary tests?
## Who can review?
@sgugger @patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21130/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21130",
"html_url": "https://github.com/huggingface/transformers/pull/21130",
"diff_url": "https://github.com/huggingface/transformers/pull/21130.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21130.patch",
"merged_at": 1673964364000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21129
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21129/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21129/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21129/events
|
https://github.com/huggingface/transformers/issues/21129
| 1,533,939,165
|
I_kwDOCUB6oc5bbg3d
| 21,129
|
Error 429
|
{
"login": "milohpeng",
"id": 47114471,
"node_id": "MDQ6VXNlcjQ3MTE0NDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/47114471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/milohpeng",
"html_url": "https://github.com/milohpeng",
"followers_url": "https://api.github.com/users/milohpeng/followers",
"following_url": "https://api.github.com/users/milohpeng/following{/other_user}",
"gists_url": "https://api.github.com/users/milohpeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/milohpeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/milohpeng/subscriptions",
"organizations_url": "https://api.github.com/users/milohpeng/orgs",
"repos_url": "https://api.github.com/users/milohpeng/repos",
"events_url": "https://api.github.com/users/milohpeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/milohpeng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, @milohpeng the script you gave seems to run fine on my side, maybe it was an issue regarding your internet connection, can you try now again and check if it runs or not? Or run `transformers-cli env` in terminal and give your system info so we can reproduce the error.",
"Here is the information as requested, \r\n\r\n- `transformers` version: 4.25.1\r\n- Platform: Linux-5.4.56.bsk.10-amd64-x86_64-with-debian-10.12\r\n- Python version: 3.7.3\r\n- Huggingface_hub version: 0.11.1\r\n- PyTorch version (GPU?): 1.11.0+cu102 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: No\r\n\r\nI'm not sure whether my IP has been blacklisted...",
"@milohpeng I used the same environment as you and managed to get the results without any errors, maybe you tried to download the model so many times that they blacklisted your IP. \r\n\r\nI can't help you with this but maybe @sgugger can.",
"This should not consistently happen, as the error reflected (429) requires **a lot** of requests in a short amount of time. @milohpeng are you using a shared IP by any chance?",
"Hey @sgugger yes I am, I'm using my company's IP to access Huggingface. Is there anything I can do to reverse this as my colleagues seem to be affected as well? Thanks in advance! ",
"Could you provide us with the IP in question, so that I can investigate further with our infra team? You can email it to me (sylvain @ hf.co without the spaces) if you don't want it public. Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @sgugger , I am encountering the same problem using different AWS SageMaker notebooks, since 24 hours.\r\n\r\nIP address: `34.236.*.*` (happy to send you the full IP, e.g. via email - I tried your above email, but got a \"Delivery Status Notification (Failure)\")\r\n\r\nCode to reproduce:\r\n```python\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\") \r\n\r\n# above raises error:\r\n# ...\r\n# OSError: There was a specific connection error when trying to load gpt2:\r\n# 429 Client Error: Too Many Requests for url: https://huggingface.co/gpt2/resolve/main/config.json\r\n```\r\n\r\nAnything I can do to overcome this problem? \r\n\r\nIt happens even if I run the above code just once. Maybe AWS uses the same IP for a lot of notebooks..",
"Hi @trianxy, sadly @sgugger is no longer at HF so the email in this thread is no longer active. \r\n\r\nPinging @philschmid who is the AWS Sagemaker master and will know more about this issue. ",
"@trianxy it seems like that the IP \"spammed\" the hub and you got rate limited. Did you ran some sort of loop to load stuff? The restriction should be lifted after 24h",
"Hey @philschmid , thanks for looking into this! \r\n\r\nNo, I didn't run a loop. I probably ran it at most 3 times before the problem appeared.\r\n\r\nAlso, I don't think the IP is unique to me. I guess AWS uses it for a lot of customer notebooks.\r\n\r\nAny chance having `34.*` IP addresses get a higher limit, since they seem to belong to AWS? \r\nAlso, is there anything else I can do to overcome this, other than waiting? (also asking for when it happens again in the future)",
"let me share that internally. ",
"@trianxy can you send the full IP at api-enterprise@huggingface.co? cc @XciD ",
"> @trianxy can you send the full IP at [api-enterprise@huggingface.co](mailto:api-enterprise@huggingface.co)? cc @XciD\r\n\r\nYes, I just did.",
"Hi @trianxy,\r\n\r\nSeems like the EC2 Ip you have was blacklisted in our waf. I've removed the restriction. Should be ready in a few minutes. \r\n\r\nCan you retry ?",
"Thanks @XciD , it works now!",
"Hello @philschmid, I am downloading the following dataset and after 20% of data was downloaded, I started getting the same error. How should I resolve it?\r\nThanks \r\n\r\nhttps://huggingface.co/datasets/cerebras/SlimPajama-627B "
] | 1,673
| 1,701
| 1,677
|
NONE
| null |
### System Info
transformers==4.25.1
Code block:
```
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("flax-community/roberta-base-thai")
model = AutoModelForMaskedLM.from_pretrained("flax-community/roberta-base-thai")
```
Error:
```
OSError: There was a specific connection error when trying to load flax-community/roberta-base-thai:
429 Client Error: Too Many Requests for url: https://huggingface.co/flax-community/roberta-base-thai/resolve/main/config.json
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Import the package and run the code block above.
### Expected behavior
No errors expected.
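A possible client-side mitigation while a rate limit clears (a sketch under the assumption that the 429 is transient, not an official recommendation) is to retry with exponential backoff:
```
import time
from transformers import AutoTokenizer

tokenizer = None
for attempt in range(5):
    try:
        tokenizer = AutoTokenizer.from_pretrained("flax-community/roberta-base-thai")
        break
    except OSError:  # transformers surfaces the 429 as an OSError, per the trace above
        time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
```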
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21129/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21128
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21128/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21128/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21128/events
|
https://github.com/huggingface/transformers/issues/21128
| 1,533,935,167
|
I_kwDOCUB6oc5bbf4_
| 21,128
|
TypeError: Descriptors cannot not be created directly. - protobuf version bug
|
{
"login": "off99555",
"id": 15215732,
"node_id": "MDQ6VXNlcjE1MjE1NzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/15215732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/off99555",
"html_url": "https://github.com/off99555",
"followers_url": "https://api.github.com/users/off99555/followers",
"following_url": "https://api.github.com/users/off99555/following{/other_user}",
"gists_url": "https://api.github.com/users/off99555/gists{/gist_id}",
"starred_url": "https://api.github.com/users/off99555/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/off99555/subscriptions",
"organizations_url": "https://api.github.com/users/off99555/orgs",
"repos_url": "https://api.github.com/users/off99555/repos",
"events_url": "https://api.github.com/users/off99555/events{/privacy}",
"received_events_url": "https://api.github.com/users/off99555/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, @off99555 You might need to create a separate environment and then install transformers where you can have a different protobuf version than rest of your system.",
"@susnato but that is not considered trivial or acceptable, no? https://stackoverflow.com/a/6572017/2593810\r\nDo you mind sharing how you would do it in a way that is not so hacky?",
"@off99555 I would install Anaconda from [here](https://docs.anaconda.com/anaconda/install/windows/) and then I would create a new environment using `conda create --name venv python=3.9` in terminal and then activate it using `conda activate venv` , and finally I would install transformers there with the right protobuf version. (Installing anything in the venv will not interfere with you local python installation)\r\nBtw never install anything in your base environment(it is the environment that you get by default after installing anaconda).\r\n\r\nI hope it helps.",
"@susnato but I'm using the `google-cloud-documentai` package in the same project that uses `transformers` package. They both require different versions of `protobuf`. I cannot just install different versions of `protobuf` on different environments because I need both environments to be activated which would cause the version collision issue again.",
"Hi, @off99555 \r\nI checked and it's actually grpcio-status which is causing the problem, (grpcio-status==1.51.1 requires protobufprotobuf>=4.21.6), you can run `pip install grpcio-status==1.33.2 protobuf==3.19.6` to fix the issue. \r\nbtw, `google-cloud-documentai` will be version 2.7.0 then.\r\n\r\nlet me know if it worked or not.",
"I am not sure why you are opening the issue here. It is TensorFlow that has problem with this version of Transformers, not Transformers itself. The issue should be raised in the TensorFlow repository. We have just pinned protobuf to avoid this problem.",
"> I am not sure why you are opening the issue here. It is TensorFlow that has problem with this version of Transformers, not Transformers itself. The issue should be raised in the TensorFlow repository. We have just pinned protobuf to avoid this problem.\r\n\r\nIt's because I don't know exactly what is the cause. I was confused. All I know is that I run `import transformers` and it gave me the error so I open the issue here. \r\n\r\nThanks for pointing out that it's related to tensorflow. So the conflict is simply that `tensorflow` uses old version of `protobuf` whereas `grpcio-status` is using the latest `protobuf` version.\r\n\r\nSo I just downgraded `grpcio-status` according to what @susnato suggests and it seems to resolve the conflict. Thank you!",
"Just wanted to flag that maybe the protobuf version in this repo _should_ be updated – I'm trying to write a gRPC service and am currently getting around this issue by passing `use_fast=False` to my HF pipeline. I can't downgrade my `grpcio-tools` package seeing the last version to not support Protobuf v3 was released in 2020, and would prefer to not have my dependencies _that_ out-of-date. So I believe using such an old PB version will impact anyone trying to write microservices using HF, not just the transitive dependencies of other packages like Tensorflow.",
"@Nickersoft We sadly cannot remove the pin on `protobuf` until TensorFlow starts supporting version 4.0. Looks like TensorFlow 2.12 will solve this, so we just have to wait for it to be out.\r\n\r\nIn the meantime, you can always remove the pin in a source install.",
"@sgugger also wanted to flag that unpinning protobuf might not be sufficient to fully resolve this issue. [This](https://github.com/huggingface/transformers/blob/main/src/transformers/utils/sentencepiece_model_pb2.py) file will need to be regenerated with newer version of protobuf as well (not sure if there are other such files in transformers code base).",
"cc @Narsil for this. Could you open a new issue specific to this @yigor and re-ping me and Narsil there? This way we'll be sure this does not slip through the crack.",
"The new sentence piece proto generated file is this : https://github.com/google/sentencepiece/blob/master/python/src/sentencepiece/sentencepiece_model_pb2.py",
"My specific version of this error only relates to the \"google/pegasus-cnn_dailymail\" model. The second fix suggested in the error message worked for me - I just added\r\n```\r\nimport os\r\nos.environ[\"PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION\"]=\"python\"\r\n```\r\nat the head of my Jupyter Notebook",
"You can risk in your base installment and downgrade that lib with:\r\npip install protobuf==3.20.*\r\n...... in Anaconda prompt, with admin privileges (if on Windows first disable win defender and any other anti-viruses)"
] | 1,673
| 1,692
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.8.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: I have GPU but I didn't run any code other than `import transformers`
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
To reproduce the behavior:
1. install latest version of `protobuf` using `pip install -U protobuf`. I have `4.21.12`
2. run `python -c "import transformers"`
3. you will see the following error
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\off99\anaconda3\lib\site-packages\transformers\__init__.py", line 30, in <module>
from . import dependency_versions_check
File "C:\Users\off99\anaconda3\lib\site-packages\transformers\dependency_versions_check.py", line 17, in <module>
from .utils.versions import require_version, require_version_core
File "C:\Users\off99\anaconda3\lib\site-packages\transformers\utils\__init__.py", line 34, in <module>
from .generic import (
File "C:\Users\off99\anaconda3\lib\site-packages\transformers\utils\generic.py", line 33, in <module>
import tensorflow as tf
File "C:\Users\off99\anaconda3\lib\site-packages\tensorflow\__init__.py", line 37, in <module>
from tensorflow.python.tools import module_util as _module_util
File "C:\Users\off99\anaconda3\lib\site-packages\tensorflow\python\__init__.py", line 37, in <module>
from tensorflow.python.eager import context
File "C:\Users\off99\anaconda3\lib\site-packages\tensorflow\python\eager\context.py", line 29, in <module>
from tensorflow.core.framework import function_pb2
File "C:\Users\off99\anaconda3\lib\site-packages\tensorflow\core\framework\function_pb2.py", line 16, in <module>
from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
File "C:\Users\off99\anaconda3\lib\site-packages\tensorflow\core\framework\attr_value_pb2.py", line 16, in <module>
from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
File "C:\Users\off99\anaconda3\lib\site-packages\tensorflow\core\framework\tensor_pb2.py", line 16, in <module>
from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
File "C:\Users\off99\anaconda3\lib\site-packages\tensorflow\core\framework\resource_handle_pb2.py", line 16, in <module>
from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
File "C:\Users\off99\anaconda3\lib\site-packages\tensorflow\core\framework\tensor_shape_pb2.py", line 36, in <module>
_descriptor.FieldDescriptor(
File "C:\Users\off99\anaconda3\lib\site-packages\google\protobuf\descriptor.py", line 560, in __new__
_message.Message._CheckCalledFromGeneratedFile()
-> <module 'google._upb._message' from 'C:\\Users\\off99\\anaconda3\\lib\\site-packages\\google\\_upb\\_message.cp38-win_amd64.pyd'...
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
```
I don't want to downgrade the version as suggested by the error message, because other packages depend on the latest version of `protobuf`, specifically `google-cloud-documentai`.
### Expected behavior
No error
Please suggest the best course of action.
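One workaround sketch, based on option 2 from the error message itself (it trades parsing speed for compatibility): force the pure-Python protobuf implementation before anything imports protobuf:
```
import os
# must be set before the first protobuf (or tensorflow/transformers) import
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

import transformers
print(transformers.__version__)
```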
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21128/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21127
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21127/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21127/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21127/events
|
https://github.com/huggingface/transformers/issues/21127
| 1,533,449,427
|
I_kwDOCUB6oc5bZpTT
| 21,127
|
MT5ForConditionalGeneration: forward() got an unexpected keyword argument 'penalty_alpha'
|
{
"login": "Mahyar-Ali",
"id": 46643509,
"node_id": "MDQ6VXNlcjQ2NjQzNTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/46643509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mahyar-Ali",
"html_url": "https://github.com/Mahyar-Ali",
"followers_url": "https://api.github.com/users/Mahyar-Ali/followers",
"following_url": "https://api.github.com/users/Mahyar-Ali/following{/other_user}",
"gists_url": "https://api.github.com/users/Mahyar-Ali/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mahyar-Ali/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mahyar-Ali/subscriptions",
"organizations_url": "https://api.github.com/users/Mahyar-Ali/orgs",
"repos_url": "https://api.github.com/users/Mahyar-Ali/repos",
"events_url": "https://api.github.com/users/Mahyar-Ali/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mahyar-Ali/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante ",
"Hi, @Mahyar-Ali \r\nI have transformers - 4.25.1 and used the code you gave (above) and it ran without any error, then I switched back to transformers - 4.16.2(same as your version) and it gave the error regarding \"penalty alpha\", I believe it is fixed in later versions of transformers, since I got the output in version 4.25.1 .\r\n\r\nMaybe you can try to upgrade to latest version and check if it works for you or not.",
"It's working now. Didn't pay attention to the version (as it was working well for `AutoModel`). Thanks!",
"@susnato thank you for pitching in ;) "
] | 1,673
| 1,673
| 1,673
|
NONE
| null |
When I use the MT5ForConditionalGeneration class to load an mt5 model and then specify the `penalty_alpha` parameter in the generate function, the library raises the error `forward() got an unexpected keyword argument 'penalty_alpha'`. But when I load the same model with the `AutoModelForSeq2SeqLM` class, it doesn't raise that error.
This shouldn't be happening, because the `AutoModel` class automatically selects the relevant `MT5ForConditionalGeneration` class for all T5 models. So why does this raise an issue when I use `MT5ForConditionalGeneration` directly?
Also, this is particularly interesting because when you fine-tune a t5 model using the `MT5ForConditionalGeneration` class but load that model (after training) using `AutoModelForSeq2SeqLM` and then use `penalty_alpha`, it still raises the same error.
Code:
```
from transformers import MT5ForConditionalGeneration, MT5Tokenizer

# model_dir, source_text and device are defined elsewhere in the script
model = MT5ForConditionalGeneration.from_pretrained(f"{model_dir}")
tokenizer = MT5Tokenizer.from_pretrained(f"{model_dir}")
input_ids = tokenizer.encode(
    source_text, return_tensors="pt", add_special_tokens=True
)
input_ids = input_ids.to(device)
generated_ids = model.generate(
    input_ids=input_ids,
    penalty_alpha=0.6, top_k=4
)
preds = [
    tokenizer.decode(
        g,
        skip_special_tokens=True,  # originally passed in from outer-scope variables
        clean_up_tokenization_spaces=True,
    )
    for g in generated_ids
]
```
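For reference, `penalty_alpha` together with `top_k > 1` enables contrastive search, which was only added to `generate()` in transformers 4.24; a minimal version guard (sketch):
```
import transformers
from packaging import version

assert version.parse(transformers.__version__) >= version.parse("4.24.0"), (
    "contrastive search (penalty_alpha) needs transformers >= 4.24"
)
```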
Transformers version: 4.16.2
Python version: 3.9.15
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21127/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21126
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21126/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21126/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21126/events
|
https://github.com/huggingface/transformers/issues/21126
| 1,533,427,806
|
I_kwDOCUB6oc5bZkBe
| 21,126
|
CUDA out of memory for bart-large while using deepspeed with Zero stage 3
|
{
"login": "xpact",
"id": 2922269,
"node_id": "MDQ6VXNlcjI5MjIyNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2922269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xpact",
"html_url": "https://github.com/xpact",
"followers_url": "https://api.github.com/users/xpact/followers",
"following_url": "https://api.github.com/users/xpact/following{/other_user}",
"gists_url": "https://api.github.com/users/xpact/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xpact/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xpact/subscriptions",
"organizations_url": "https://api.github.com/users/xpact/orgs",
"repos_url": "https://api.github.com/users/xpact/repos",
"events_url": "https://api.github.com/users/xpact/events{/privacy}",
"received_events_url": "https://api.github.com/users/xpact/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Really, this is a question for the Deepspeed Issues since HF Trainer only provides the integration, but why do you expect that this is supposed to fit into 16GB? Colab barely has any cpu ram so very often there isn't really any free memory to offload to. \r\n\r\nBut let's figure out the math first and then it'd be easy to manage expectations.\r\n\r\nHow many params does this model have? e.g. if the checkpoint is saved in half precision - from `1.63 GB` model size it'd mean you'd need `1.63*9=14.67` i.e. 15GB just for the weights. If it were in fp32, than half of that and then it should fit w/o using zero at all.\r\n\r\nYou typically need `n_params * 18` just for the weights in mixed precision training. And then more for activations.\r\n\r\nAlso I don't see where you override the batch size - you should set it to `1` to start with and only increase that if you don't OOM.\r\n\r\np.s. to count the unique params:\r\n```\r\nsum(dict((p.data_ptr(), p.numel()) for p in model.parameters()).values())\r\n```",
"A bit more data. \r\nWhen using the standard GPU and high RAM environment before any training I see:\r\nGen RAM Free: 53.5 GB | Proc size: 93.4 MB\r\nGPU RAM Free: 15109MB | Used: 0MB | Util 0% | Total 15109MB\r\n\r\nWhen explicitly setting the batch size to 1 it works even on the smaller GPU (T4). Once the training starts it keeps the GPU RAM utilization at about 7GB from what I can see:\r\nGPU RAM Free: 8067MB | Used: 7042MB | Util 47% | Total 15109MB\r\n\r\nFor batch size 2:\r\nGPU RAM Free: 6407MB | Used: 8702MB | Util 58% | Total 15109MB\r\n\r\nFor batch size 4:\r\nGPU RAM Free: 1975MB | Used: 13134MB | Util 87% | Total 15109MB\r\n\r\nAt batch size 6 we are hitting the limits:\r\nGPU RAM Free: 157MB | Used: 14952MB | Util 99% | Total 15109MB\r\n\r\nSo, when using auto for batch size, deepspeed determines optimal batch size to be 8 and runs out of CUDA memory. \r\nThis is definitely not a transformer issue. \r\n\r\nI guess lesson learned is to ramp up the batch size as opposed to assuming that deepspeed will calculate the optimal size.",
"I'm glad to see you sorted it out, @xpact \r\n\r\n> So, when using auto for batch size, deepspeed determines optimal batch size to be 8 and runs out of CUDA memory.\r\n\r\nThe `auto` values in the ds config file are used differently depending on their key. their main purpose is to avoid situations where the command line args and ds_config mismatch, so basically they are just substituted with the command line args.\r\n\r\nOnly `auto`'s from `zero_optimization` section is actually \"optimized\" to the model size - this is the only exception.\r\n\r\nEach `auto` key is documented here: https://huggingface.co/docs/transformers/main/main_classes/deepspeed#deepspeed-trainer-integration and this is exclusively HF Trainer feature (and of Accelerate) - i.e. deepspeed has no idea what to do with `auto` values.\r\n\r\n-------------------\r\n\r\nIn general the batch size is typically one of the most importants hparam - and you always want to define it explicitly and not rely on any defaults.",
"You can also add `skip_memory_metrics=False` to your training args and it'll print you the full memory usage stats at the end of each run. "
] | 1,673
| 1,673
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.27
- Python version: 3.8.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: GPU 0: Tesla T4
- Using distributed or parallel set-up in script?: no
### Who can help?
@stas00
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Trying to run the following in colab [link](https://colab.research.google.com/drive/1zHogV6VnqGPV5LoQoDzTy9D2OafYpWkz?usp=sharing)
It is a pretty stock example of running summarization training with bart-large-cnn with deepspeed turned on. I was hoping to train t5-3b on an A100, but I am running into issues with a smaller model on a T4.
### Expected behavior
Running this on a Tesla T4 with 16 GB of GPU RAM, one would expect bart-large-cnn to work with ZeRO stage 3 enabled. Instead I get `OutOfMemoryError: CUDA out of memory`.
ZeRO stage 3 seems to be enabled, per the following output in the log file:
```
...
[2023-01-14 18:34:57,363] [INFO] [config.py:1024:print] zero_enabled ................. True
[2023-01-14 18:34:57,364] [INFO] [config.py:1024:print] zero_optimization_stage ...... 3
```
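For anyone else hitting this, a back-of-envelope check (a sketch: the ~406M parameter count for bart-large-cnn is an assumption, and `n_params * 18` bytes is the usual rule of thumb for mixed-precision Adam training, before activations):
```
n_params = 406_000_000                   # assumption: bart-large-cnn parameter count
train_bytes = n_params * 18              # weights + grads + Adam states in mixed precision
print(f"{train_bytes / 2**30:.1f} GiB")  # ~6.8 GiB before activations and batch size
```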
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21126/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21125
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21125/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21125/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21125/events
|
https://github.com/huggingface/transformers/pull/21125
| 1,533,244,913
|
PR_kwDOCUB6oc5HYROo
| 21,125
|
Use raw string for regex in tokenization_t5_fast.py
|
{
"login": "odashi",
"id": 1023695,
"node_id": "MDQ6VXNlcjEwMjM2OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1023695?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/odashi",
"html_url": "https://github.com/odashi",
"followers_url": "https://api.github.com/users/odashi/followers",
"following_url": "https://api.github.com/users/odashi/following{/other_user}",
"gists_url": "https://api.github.com/users/odashi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/odashi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/odashi/subscriptions",
"organizations_url": "https://api.github.com/users/odashi/orgs",
"repos_url": "https://api.github.com/users/odashi/repos",
"events_url": "https://api.github.com/users/odashi/events{/privacy}",
"received_events_url": "https://api.github.com/users/odashi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
This change replaces the regex pattern written as a Unicode string with a raw string, to suppress the `DeprecationWarning`/`SyntaxError` around the pattern.
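A tiny illustration of the warning being silenced (generic Python behavior, not specific to this file):
```
import re

re.compile(r"\s+")  # raw string: no warning
# re.compile("\s+") # non-raw: "DeprecationWarning: invalid escape sequence '\s'"
                    # (and a SyntaxError in a future Python version)
```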
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger and @raghavanone who organized the original code.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21125/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21125",
"html_url": "https://github.com/huggingface/transformers/pull/21125",
"diff_url": "https://github.com/huggingface/transformers/pull/21125.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21125.patch",
"merged_at": 1673776591000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21124
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21124/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21124/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21124/events
|
https://github.com/huggingface/transformers/pull/21124
| 1,533,211,941
|
PR_kwDOCUB6oc5HYK9s
| 21,124
|
[LongT5] Remove duplicate encoder_attention_mask default value check
|
{
"login": "guillaume-be",
"id": 27071604,
"node_id": "MDQ6VXNlcjI3MDcxNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaume-be",
"html_url": "https://github.com/guillaume-be",
"followers_url": "https://api.github.com/users/guillaume-be/followers",
"following_url": "https://api.github.com/users/guillaume-be/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions",
"organizations_url": "https://api.github.com/users/guillaume-be/orgs",
"repos_url": "https://api.github.com/users/guillaume-be/repos",
"events_url": "https://api.github.com/users/guillaume-be/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaume-be/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR performs a minor code clean-up by removing duplicate code in the LongT5 stack.
Currently, if the stack is a decoder block with `encoder_hidden_states` provided, the check for the existence of `encoder_attention_mask` is done twice:
1. https://github.com/huggingface/transformers/blob/c8f35a9ce37bd03f37fcf8336172bdcbe7ffc86a/src/transformers/models/longt5/modeling_longt5.py#L1452
2. https://github.com/huggingface/transformers/blob/c8f35a9ce37bd03f37fcf8336172bdcbe7ffc86a/src/transformers/models/longt5/modeling_longt5.py#L1479
Because the conditions of the second check are identical to those of the first check (`self.is_decoder is True`, `encoder_attention_mask is None`, `encoder_hidden_states is not None`), the second check can never be true: `encoder_attention_mask` was already set to a default during the first check.
This PR proposes removing the first check and only using the second one, where the extended encoder attention mask is set anyway.
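A toy reproduction of why the second guard is dead code (hypothetical `default_mask` helper, condensed from the real stack logic):
```
import torch

def default_mask(is_decoder, encoder_hidden_states, encoder_attention_mask):
    if is_decoder and encoder_hidden_states is not None and encoder_attention_mask is None:
        encoder_attention_mask = torch.ones(encoder_hidden_states.shape[:2])  # first guard fires
    if is_decoder and encoder_hidden_states is not None and encoder_attention_mask is None:
        print("never reached")  # identical guard: the mask is no longer None here
    return encoder_attention_mask

print(default_mask(True, torch.zeros(2, 7, 16), None).shape)  # torch.Size([2, 7])
```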
## Who can review?
@ArthurZucker , @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21124/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21124/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21124",
"html_url": "https://github.com/huggingface/transformers/pull/21124",
"diff_url": "https://github.com/huggingface/transformers/pull/21124.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21124.patch",
"merged_at": 1673875617000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21123
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21123/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21123/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21123/events
|
https://github.com/huggingface/transformers/issues/21123
| 1,533,207,801
|
I_kwDOCUB6oc5bYuT5
| 21,123
|
Ernie-M
|
{
"login": "KnutJaegersberg",
"id": 17965169,
"node_id": "MDQ6VXNlcjE3OTY1MTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/17965169?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KnutJaegersberg",
"html_url": "https://github.com/KnutJaegersberg",
"followers_url": "https://api.github.com/users/KnutJaegersberg/followers",
"following_url": "https://api.github.com/users/KnutJaegersberg/following{/other_user}",
"gists_url": "https://api.github.com/users/KnutJaegersberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KnutJaegersberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KnutJaegersberg/subscriptions",
"organizations_url": "https://api.github.com/users/KnutJaegersberg/orgs",
"repos_url": "https://api.github.com/users/KnutJaegersberg/repos",
"events_url": "https://api.github.com/users/KnutJaegersberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/KnutJaegersberg/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"https://github.com/PaddlePaddle/ERNIE/blob/ernie-kit-open-v1.0/erniekit/modules/ernie.py\r\n\r\nhas more implementation details.",
"Hi, @shermansiu is there any pytorch/tf implementation of this model?",
"None that I'm aware of.\r\n\r\nAnyways, the author of [ERNIE-Pytorch](https://github.com/nghuyong/ERNIE-Pytorch) ported over a few other Ernie models to Huggingface. I'm sure it could be adapted for this. And the PaddlePaddle syntax is quite similar to that of PyTorch, so I'm sure it should be relatively easy, though it'll probably take some time.",
"@shermansiu Thanks for the resources!\r\nI am currently trying to port the model to huggingface(pytorch), (done till Embedding Layer with acceptable tolerance of 1e-3)",
"Hi @KnutJaegersberg, Ernie-M is implemented!"
] | 1,673
| 1,676
| 1,676
|
NONE
| null |
### Model description
Ernie-M looks pretty good in multilingual benchmarks, beating XLM-RoBERTa.
PaddlePaddle recently added Ernie-M to the Hugging Face Hub; we can use it with paddlenlp.transformers.
It would be nice to have the model supported in huggingface transformers as well.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://huggingface.co/PaddlePaddle/ernie-m-base
https://huggingface.co/PaddlePaddle/ernie-m-large
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21123/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21122
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21122/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21122/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21122/events
|
https://github.com/huggingface/transformers/issues/21122
| 1,533,185,765
|
I_kwDOCUB6oc5bYo7l
| 21,122
|
FELIX: Flexible Text Editing Through Tagging and Insertion
|
{
"login": "shermansiu",
"id": 12627125,
"node_id": "MDQ6VXNlcjEyNjI3MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/12627125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shermansiu",
"html_url": "https://github.com/shermansiu",
"followers_url": "https://api.github.com/users/shermansiu/followers",
"following_url": "https://api.github.com/users/shermansiu/following{/other_user}",
"gists_url": "https://api.github.com/users/shermansiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shermansiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shermansiu/subscriptions",
"organizations_url": "https://api.github.com/users/shermansiu/orgs",
"repos_url": "https://api.github.com/users/shermansiu/repos",
"events_url": "https://api.github.com/users/shermansiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shermansiu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Because FELIX can be applied to any encoder-only model, perhaps what is needed is an `AutoModelForSequenceEditing`/`<ModelName>ModelForSequenceEditing`?",
"Closing this as it's a duplicate of #11632"
] | 1,673
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
### Model description
FELIX is an encoder-only text editing model, which allows for faster editing and summarization than sequence-to-sequence models, because the summarization can be computed in parallel instead of autoregressively.
- [Blog](https://ai.googleblog.com/2021/05/introducing-felix-flexible-text-editing.html?hl=hr&m=1)
- [Paper](https://aclanthology.org/2020.findings-emnlp.111/)
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
https://github.com/google-research/google-research/tree/master/felix
No weights are available, but code to train it is available. A component of FELIX is BERT, so training FELIX is a matter of fine-tuning a pre-trained BERT model.
@Jmallins
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21122/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21121
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21121/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21121/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21121/events
|
https://github.com/huggingface/transformers/pull/21121
| 1,533,174,229
|
PR_kwDOCUB6oc5HYD9Q
| 21,121
|
Add Epsilon- and Eta-Sampling
|
{
"login": "shermansiu",
"id": 12627125,
"node_id": "MDQ6VXNlcjEyNjI3MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/12627125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shermansiu",
"html_url": "https://github.com/shermansiu",
"followers_url": "https://api.github.com/users/shermansiu/followers",
"following_url": "https://api.github.com/users/shermansiu/following{/other_user}",
"gists_url": "https://api.github.com/users/shermansiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shermansiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shermansiu/subscriptions",
"organizations_url": "https://api.github.com/users/shermansiu/orgs",
"repos_url": "https://api.github.com/users/shermansiu/repos",
"events_url": "https://api.github.com/users/shermansiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shermansiu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21121). All of your documentation changes will be reflected on that endpoint.",
"@gante I incorporated your changes! I rebased with `main` because there were some merge conflicts, which caused some of the review snippets above to get \"outdated,\" but it's done!",
"@sgugger this PR adds a new sampling-based generation strategy that can be effectively implemented through logits processors",
"Yeah, no problem!😊",
"@shermansiu now step 4 remains :) Would you like to work on it? (we can retweet and share your posts)",
"I'm interested, but it feels like it's best added to https://github.com/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb than in its own notebook, as other truncation sampling approaches (top-p and top-k) are already there.",
"We don't update the notebooks for our blog posts, so they stay consistent with their content :) But we will certainly update our [new docs](https://huggingface.co/docs/transformers/main/en/generation_strategies)\r\n\r\nThe new docs only contain simple examples (for now), so the benefits of the new generation strategy should be showcased in a notebook. In the near future, we will have an advanced text generation doc with clear examples for each flag (but it won't be ready in the next 2-3 months)",
"I have a lot on my to-do list right now... Although I'd love to contribute to the notebook, I think it's unrealistic for me to be able to put something out soon.",
"Yesterday's ACL 2023 tutorial on \"Generating Text from Large Language Models\" covers eta-sampling and more! John Hewitt, the first author of the eta-sampling paper, was one of the presenters for that tutorial!\r\n\r\nSite: https://rycolab.io/classes/acl-2023-tutorial/\r\nSlides: https://drive.google.com/file/d/1UHbGcjzBURG1n2DufC7iDTmGNjIz5Dp_/view"
] | 1,673
| 1,689
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Implements epsilon- and eta-sampling, as seen in [Truncation Sampling as Language Model Desmoothing](https://arxiv.org/abs/2210.15191). The code is adapted from the author's official repository [here](https://github.com/john-hewitt/truncation-sampling).
Resolves #21092.
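A minimal usage sketch (hedged: the kwarg names `epsilon_cutoff`/`eta_cutoff` are those added by this PR; the cutoff values below are illustrative, not tuned):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Truncation sampling", return_tensors="pt")

# Epsilon-sampling: drop tokens whose probability falls below an absolute cutoff.
out = model.generate(**inputs, do_sample=True, epsilon_cutoff=3e-4, max_new_tokens=20)
# Eta-sampling: an entropy-adaptive cutoff; see the paper for recommended values.
out = model.generate(**inputs, do_sample=True, eta_cutoff=3e-4, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```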
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21121/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21121",
"html_url": "https://github.com/huggingface/transformers/pull/21121",
"diff_url": "https://github.com/huggingface/transformers/pull/21121.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21121.patch",
"merged_at": 1673978672000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21120
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21120/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21120/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21120/events
|
https://github.com/huggingface/transformers/issues/21120
| 1,533,162,500
|
I_kwDOCUB6oc5bYjQE
| 21,120
|
`PreTrainedTokenizer` (slow) strips tokens that are around `unique_no_split_tokens`
|
{
"login": "Gompyn",
"id": 29288660,
"node_id": "MDQ6VXNlcjI5Mjg4NjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/29288660?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gompyn",
"html_url": "https://github.com/Gompyn",
"followers_url": "https://api.github.com/users/Gompyn/followers",
"following_url": "https://api.github.com/users/Gompyn/following{/other_user}",
"gists_url": "https://api.github.com/users/Gompyn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gompyn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gompyn/subscriptions",
"organizations_url": "https://api.github.com/users/Gompyn/orgs",
"repos_url": "https://api.github.com/users/Gompyn/repos",
"events_url": "https://api.github.com/users/Gompyn/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gompyn/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This is probably due to the following line, which is still not fixed in the HEAD.\r\n\r\nhttps://github.com/huggingface/transformers/blob/f58248b8240571bbbb0918ddd36cc3fdf061df11/src/transformers/tokenization_utils.py#L532-L537",
"This bug strips away `\\n` around my special token, making my model believe that there is no newline in my text.",
"@ArthurZucker I can pick up this, Let me know what should be possible fix ? \r\n",
"There is indeed a discrepancy between the `fast` and `slow` version. \r\nThe problem here is that the tokens are indeed part of the `no_split_tokens`, but they are not `AddedToken`.\r\nI am not really sure if the `fast` or `slow` has the correct behavior 😅 \r\n\r\n",
"The cleanest way is to have the tokens as `AddedTokens` because you can handle the `rstrip` and `lstripe` arguments",
"@ArthurZucker I think `decode(encode(text)) == text` should be true by default, because some use cases (e.g. code generation) require the correct formatting of text. \"Automatic formatting\" should not be done by default to avoid breaking such use cases.\r\nFrom another point of view, I guess most pre-trained models use a fast tokenizer (as the name `fast` implies), so these models also expect the behavior of the `fast` version.",
"> I think decode(encode(text)) == text should be true by default\r\n\r\nThis is untrue for pretty much all tokenizers, since tokenization is a destructive operation. At the very least you get back the normalized text (with some minimal unicode clean up) but for some tokenizers like BERT you will have whitespace simplified or text lowercased.",
"> > I think decode(encode(text)) == text should be true by default\r\n> \r\n> This is untrue for pretty much all tokenizers, since tokenization is a destructive operation. At the very least you get back the normalized text (with some minimal unicode clean up) but for some tokenizers like BERT you will have whitespace simplified or text lowercased.\r\n\r\nI agree that minimal unicode clean up is acceptable (mostly because that does not break my use cases), but whitespace simplification or text lowercasing is not by default enabled, so by default users do get a mostly conservative tokenizer.\r\nBut to add new tokens, the most simple way (`add_tokens('mytoken')` with `special_tokens=False` by default) in a slow tokenizer accidentally (from the view of a user) breaks this conservative behavior, and I think this is unexpected by users.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Is there any progress on this issue? @ArthurZucker ",
"Not yet! I finally have time so this week should be good! ",
"Is there any progress on this issue?",
"Hey, to follow progress is suggest you check #23909, which should try to adresse this. ",
"Quick update, this is gonna take a bit more time as a more in-depth refactoring is needed",
"PR will be merged this week! 🤗 "
] | 1,673
| 1,695
| 1,695
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
1. load a `PreTrainedTokenizer` that contains `unique_no_split_tokens`, e.g. `EleutherAI/gpt-j-6B`.
```python
tokenizer = transformers.GPT2Tokenizer.from_pretrained('EleutherAI/gpt-j-6B')
```
2. use the tokenizer to split a string that contains one of the `unique_no_split_tokens`, e.g. `" <|extratoken_1|> "`.
```python
print(tokenizer(" <|extratoken_1|> ").input_ids)
```
### Expected behavior
The tokenizer splits the string into 3 tokens (`" "`, `"<|extratoken_1|>"` and `" "`), and gives their ids (`[220, 50257, 220]`). This is the behavior of `PreTrainedTokenizerFast`.
But the actual behavior is that the `PreTrainedTokenizer` only gives the id of `"<|extratoken_1|>"`, i.e. `50257`.
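A minimal sketch contrasting the two behaviors side by side (ids taken from the reproduction above):
```python
from transformers import GPT2Tokenizer, GPT2TokenizerFast

slow = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-j-6B")
fast = GPT2TokenizerFast.from_pretrained("EleutherAI/gpt-j-6B")

text = " <|extratoken_1|> "
print(slow(text).input_ids)  # [50257] -- surrounding spaces are stripped
print(fast(text).input_ids)  # [220, 50257, 220] -- spaces are preserved
```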
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21120/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21119
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21119/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21119/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21119/events
|
https://github.com/huggingface/transformers/issues/21119
| 1,533,085,868
|
I_kwDOCUB6oc5bYQis
| 21,119
|
GPT2 tokenizer decode swallows space
|
{
"login": "daniel-ziegler",
"id": 1620154,
"node_id": "MDQ6VXNlcjE2MjAxNTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1620154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daniel-ziegler",
"html_url": "https://github.com/daniel-ziegler",
"followers_url": "https://api.github.com/users/daniel-ziegler/followers",
"following_url": "https://api.github.com/users/daniel-ziegler/following{/other_user}",
"gists_url": "https://api.github.com/users/daniel-ziegler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daniel-ziegler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daniel-ziegler/subscriptions",
"organizations_url": "https://api.github.com/users/daniel-ziegler/orgs",
"repos_url": "https://api.github.com/users/daniel-ziegler/repos",
"events_url": "https://api.github.com/users/daniel-ziegler/events{/privacy}",
"received_events_url": "https://api.github.com/users/daniel-ziegler/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, @daniel-ziegler use `clean_up_tokenization_spaces=False` in `decode` so the code will be - \r\n`print(tokenizer.decode(tokens, clean_up_tokenization_spaces=False))` \r\nthis solves the problem,\r\n\r\nthanks, \r\nsusnato.",
"Indeed! Thanks for jumping this quickly @susnato 🤗",
"Thanks, that does work. I still think it's a bug that that isn't the default behavior for reversible tokenizers like GPT2's -- there's only one standard decoding behavior for them and it should be the default.",
"@daniel-ziegler, I think it's due to the reason that most tokenizers don't preserve the structure such as spaces, and the huggingface team didn't want to have different implementations for both type of tokenizers (which will make the code more complecated!), so it's True by default.",
"@ArthurZucker When LLMs are used to generate code, keeping `clean_up_tokenization_spaces=True` can often \r\nlead to uncompilable code being generated because of the [clean_up_tokenization()](https://github.com/huggingface/transformers/blob/c48787f347bd604f656c2cfff730e029c8f8c1fe/src/transformers/tokenization_utils_base.py#L3798) method. I wanted to know if there are any pitfalls in using `clean_up_tokenization_spaces=False` ?",
"I have not conducted experiments but overall would just not recommend using GPT2 model and rather Llama. 🤗 ",
"@ArthurZucker Will the same issue not persist in Llama as well? If the next token to be generated is between two quotes, it can still give an error when `clean_up_tokenization_spaces=True`. (`string var1 = \"abc\"` vs `string var1 = \"abc \"`)",
"No it's not the same tokenizer so I don't think it will have the same behaviour "
] | 1,673
| 1,704
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.10.2+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.4.1 (cpu)
- Jax version: 0.4.1
- JaxLib version: 0.4.1
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokens = tokenizer.encode("y is ?")
print(tokens) # prints [88, 318, 5633]
print(tokenizer.decode(tokens)) # prints "y is?" (wrong!)
```
### Expected behavior
It should roundtrip back to the same string, matching the behavior of OpenAI's reference implementation: https://github.com/openai/gpt-2/blob/master/src/encoder.py
OpenAI's implementation encodes to the same tokens, but correctly decodes them to `"y is ?"`
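For reference, a sketch of the workaround from the comments: disabling the clean-up step makes the round trip exact here:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokens = tokenizer.encode("y is ?")
# Skipping the clean-up step preserves the space before "?"
print(tokenizer.decode(tokens, clean_up_tokenization_spaces=False))  # "y is ?"
```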
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21119/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21118
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21118/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21118/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21118/events
|
https://github.com/huggingface/transformers/issues/21118
| 1,532,936,901
|
I_kwDOCUB6oc5bXsLF
| 21,118
|
Issue with ESM and DDP due to unused positional embeddings when rotary embeddings are specified
|
{
"login": "simonlevine",
"id": 50503513,
"node_id": "MDQ6VXNlcjUwNTAzNTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/50503513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonlevine",
"html_url": "https://github.com/simonlevine",
"followers_url": "https://api.github.com/users/simonlevine/followers",
"following_url": "https://api.github.com/users/simonlevine/following{/other_user}",
"gists_url": "https://api.github.com/users/simonlevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonlevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonlevine/subscriptions",
"organizations_url": "https://api.github.com/users/simonlevine/orgs",
"repos_url": "https://api.github.com/users/simonlevine/repos",
"events_url": "https://api.github.com/users/simonlevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonlevine/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Note that you can avoid any error by passing `find_unused_parameters=True` when wrapping your model in DDP.",
"This is part of the model design that I believe is in the original repo as well, but if it's still causing issues with @sgugger's fix, we can look into making a PR to remove the base positional embeddings when the model is using rotary embeddings instead!\r\n\r\nIf it's working for you with that fix, though, feel free to close the issue.",
"Hi @Rocketknight1, it would fix the issue but due to other features of the particular model (ESM within a larger module) in use, the flag results in `RuntimeError: Expected to mark a variable ready only once`. At any rate, it turned out this portion of ESM was the only problematic portion, but feel free to close the issue since that find unused params fix will (probably) be sufficient for most users.",
"Alright, I'll do that for now - but if anyone comes across this issue and `find_unused_parameters=True` is not helping, feel free to reopen and comment here!"
] | 1,673
| 1,674
| 1,674
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.4.0-136-generic-x86_64-with-glibc2.31
- Python version: 3.10.4
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, 1 GPU
- Using distributed or parallel set-up in script?: Yes, DDP
### Who can help?
@ArthurZucker @Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Using the `ESMModel` class results in the creation of an unused parameter. In the DDP setting, this causes errors, as some parameters don't receive a grad. Within `EsmEmbeddings`, `self.position_embeddings` should not be instantiated as an `nn.Embedding` if `config.position_embedding_type` is not `absolute`.
### Expected behavior
We would expect instantiation of `EsmEmbeddings` to not create the unused `nn.Embedding` module, as this can create cryptic errors later due to unused parameters.
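A minimal sketch of the suggested guard (a sketch only, assuming the config/attribute names above; not the actual `modeling_esm.py`):
```python
import torch.nn as nn

class EsmEmbeddings(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.position_embedding_type = config.position_embedding_type
        # Only create absolute position embeddings when they will actually be
        # used, so DDP never sees parameters that receive no gradients.
        if self.position_embedding_type == "absolute":
            self.position_embeddings = nn.Embedding(
                config.max_position_embeddings, config.hidden_size
            )
        else:  # e.g. "rotary" -- handled inside the attention layers instead
            self.position_embeddings = None
```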
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21118/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21117
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21117/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21117/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21117/events
|
https://github.com/huggingface/transformers/issues/21117
| 1,532,886,242
|
I_kwDOCUB6oc5bXfzi
| 21,117
|
Problem running a project with transformers
|
{
"login": "jooray",
"id": 1028688,
"node_id": "MDQ6VXNlcjEwMjg2ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1028688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jooray",
"html_url": "https://github.com/jooray",
"followers_url": "https://api.github.com/users/jooray/followers",
"following_url": "https://api.github.com/users/jooray/following{/other_user}",
"gists_url": "https://api.github.com/users/jooray/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jooray/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jooray/subscriptions",
"organizations_url": "https://api.github.com/users/jooray/orgs",
"repos_url": "https://api.github.com/users/jooray/repos",
"events_url": "https://api.github.com/users/jooray/events{/privacy}",
"received_events_url": "https://api.github.com/users/jooray/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I am unable to reproduce on my side, even in a new environment an installing numpy 1.24.1. Could you maybe try a full re-install in a fresh environment?",
"It is a system-wide Python install, I currently don't have time to reinstall the whole system.\r\n\r\nI hot-fixed it by removing numpy from transformers/dependency_versions_check.py, I changed pkgs_to_check_at_runtime to not include numpy.\r\n\r\n```python\r\npkgs_to_check_at_runtime = \"python tqdm regex requests packaging filelock tokenizers\".split()\r\n```\r\n\r\nAfter that, everything works (including, obviously, numpy), so it is really just the check that is broken.\r\n\r\nSharing for others who might stumble on the issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I am having the same issue, and have reproduced the steps above.\r\nDoing the hot-fix of removing Numpy from the dependency checks seems to have worked 👍 ",
"Maybe check your `site-packages/` directory first? This issue has also happened to me because both `numpy-1.24.2.dist-info/` and `numpy-1.24.2-py3.10.egg-info/` exist in `site-packages/`, and `numpy-1.24.2-py3.10.egg-info/` is an empty directory. I assume this mainly when `importlib_metadata` happens to seek metadata in a different directory, and the directory relative happens to be empty, it might return `None`. You can try to check your directory by this\r\n\r\n```\r\n$ ls /opt/homebrew/lib/python3.10/site-packages/ | grep numpy\r\n```\r\n\r\nIn addition, you may use `rm -rf` to delete the empty one or the distracting one.",
"> Maybe check your `site-packages/` directory first? This issue has also happened to me because both `numpy-1.24.2.dist-info/` and `numpy-1.24.2-py3.10.egg-info/` exist in `site-packages/`, and `numpy-1.24.2-py3.10.egg-info/` is an empty directory. I assume this mainly when `importlib_metadata` happens to seek metadata in a different directory, and the directory relative happens to be empty, it might return `None`. You can try to check your directory by this\r\n> \r\n> ```\r\n> $ ls /opt/homebrew/lib/python3.10/site-packages/ | grep numpy\r\n> ```\r\n> \r\n> In addition, you may use `rm -rf` to delete the empty one or the distracting one.\r\n\r\ndeleting empty numpy-1.26.2-py3.10.egg-xxx directory resolves this issue for my MacBook M1 and transformers(huggingface) ",
"> > Maybe check your `site-packages/` directory first? This issue has also happened to me because both `numpy-1.24.2.dist-info/` and `numpy-1.24.2-py3.10.egg-info/` exist in `site-packages/`, and `numpy-1.24.2-py3.10.egg-info/` is an empty directory. I assume this mainly when `importlib_metadata` happens to seek metadata in a different directory, and the directory relative happens to be empty, it might return `None`. You can try to check your directory by this\r\n> > ```\r\n> > $ ls /opt/homebrew/lib/python3.10/site-packages/ | grep numpy\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > In addition, you may use `rm -rf` to delete the empty one or the distracting one.\r\n> \r\n> deleting empty numpy-1.26.2-py3.10.egg-xxx directory resolves this issue for my MacBook M1 and transformers(huggingface)\r\n\r\nThis also solved my issue. Thanks 🙌🏽"
] | 1,673
| 1,702
| 1,677
|
NONE
| null |
### System Info
I am trying to run a project that uses transformers (whisper), and I get this error message:
```
ValueError: Unable to compare versions for numpy>=1.17: need=1.17 found=None. This is unusual. Consider reinstalling numpy.
```
I have tried:
```
pip3 install numpy
pip3 install -I transformers --no-cache-dir --force-reinstall
```
When I run the same python interpreter (pip3 and python3 are from the same interpreter) and import numpy, it is successful:
```
$ head -1 `which whisper`
#!/usr/local/opt/python@3.10/bin/python3.10
$ /usr/local/opt/python@3.10/bin/python3.10
Python 3.10.9 (main, Dec 15 2022, 18:25:35) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import numpy.version as version
>>> version.version
'1.24.1'
```
I was trying to dig deeper and looked at transformers/utils/versions.py. I looked at
```python
got_ver = importlib_metadata.version(pkg)
```
And got_ver indeed returns None, even though I can use numpy:
```
Python 3.10.9 (main, Dec 15 2022, 18:25:35) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import importlib.metadata as importlib_metadata
>>> numpy.version.version
'1.24.1'
>>> print(importlib_metadata.version("numpy"))
None
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
Python 3.10.9 (main, Dec 15 2022, 18:25:35) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import importlib.metadata as importlib_metadata
>>> numpy.version.version
'1.24.1'
>>> print(importlib_metadata.version("numpy"))
None
```
### Expected behavior
The last line would not return None and the transformer library would not return this:
```
>>> import transformers
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/site-packages/transformers/__init__.py", line 30, in <module>
from . import dependency_versions_check
File "/usr/local/lib/python3.10/site-packages/transformers/dependency_versions_check.py", line 41, in <module>
require_version_core(deps[pkg])
File "/usr/local/lib/python3.10/site-packages/transformers/utils/versions.py", line 123, in require_version_core
return require_version(requirement, hint)
File "/usr/local/lib/python3.10/site-packages/transformers/utils/versions.py", line 117, in require_version
_compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
File "/usr/local/lib/python3.10/site-packages/transformers/utils/versions.py", line 45, in _compare_versions
raise ValueError(
ValueError: Unable to compare versions for numpy>=1.17: need=1.17 found=None. This is unusual. Consider reinstalling numpy.
```
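A hedged diagnostic sketch (based on the egg-info explanation in the comments): enumerate every metadata record the interpreter can see; a record with a `None` name/version usually comes from an empty `*.egg-info` directory shadowing the real install:
```python
import importlib.metadata as importlib_metadata

for dist in importlib_metadata.distributions():
    name = dist.metadata["Name"]
    if name is None or "numpy" in name.lower():
        print(name, dist.version)  # a None here points at a broken metadata dir
```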
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21117/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21116
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21116/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21116/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21116/events
|
https://github.com/huggingface/transformers/issues/21116
| 1,532,859,735
|
I_kwDOCUB6oc5bXZVX
| 21,116
|
PushToHubCallback is hanging on training completion
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"`PushToHubCallback(..., save_strategy=\"no\")` seems to fix the hanging problem but in that case the model will only be saved at the end of the training...",
"Investigating this now - PushToHubCallback shouldn't be hanging like this.",
"I tried running this locally and it ran successfully for me - is it possible that model uploading from Colab was just slow, and you didn't give it time to complete? In some cases the progress bar on the final upload doesn't display, though I have some ideas on how to fix that!",
"Hi @Rocketknight1 , thanks for looking into the issue! When I reported this, I gave it about 40-45 minutes after it finished training. At that point, nothing was uploaded to Hub and the notebook cell just kept spinning with no end and I interrupted the execution. However, I have tried to reproduce it again just now, and… it worked. Checkpoints uploaded during training, and final model took less than 2 minutes to upload. I have not modified anything in the code. No clue why it wasn’t working before, and why it does now. \r\nMy only guess is that perhaps there was some sort of connection issue. Could it be that the callback was not able to reach Hub at that moment and kept trying? ",
"As a piece of anecdotal evidence - in case it's useful - I tried running @MKhalusova example a few days ago (added full script below), and also had issues with hanging. It ran seemlessly today - no idea what changed 🤷♀️ \r\n\r\nTwo notes: \r\n* [The model card only](https://huggingface.co/amyeroberts/my_food_classifier) shows performance for epoch 0, even though the save strategy is \"epoch\". I'm guessing this is because of [not starting an upload job whilst another is happening](https://github.com/huggingface/transformers/blob/91c2278b97a16e7dcde28fd0fce72969560f587b/src/transformers/keras_callbacks.py#L374)? \r\n* I've never seen a progress bar for upload either at the end of an epoch or at the end of training using `PushToHubCallback` \r\n\r\n```python\r\nimport numpy as np\r\nimport tensorflow as tf\r\n\r\nimport evaluate\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoImageProcessor, DefaultDataCollator, TFAutoModelForImageClassification, create_optimizer\r\nfrom transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback\r\n\r\nBATCH_SIZE = 16\r\nNUM_BATCHES = 5\r\nNUM_EPOCHS = 5\r\nLEARNING_RATE = 3e-5\r\nWEIGHT_DECAY_RATE = 0.01\r\nSEED = 42\r\n\r\n# Load in the dataset\r\nfood = load_dataset(\"food101\", split=\"train[:5000]\")\r\nfood = food.train_test_split(test_size=0.2)\r\n\r\nlabels = food[\"train\"].features[\"label\"].names\r\nlabel2id, id2label = dict(), dict()\r\nfor i, label in enumerate(labels):\r\n label2id[label] = str(i)\r\n id2label[str(i)] = label\r\n\r\ncheckpoint = \"google/vit-base-patch16-224-in21k\"\r\nimage_processor = AutoImageProcessor.from_pretrained(checkpoint)\r\naccuracy = evaluate.load(\"accuracy\")\r\n\r\ndef process(examples):\r\n examples.update(image_processor(examples['image'], ))\r\n return examples\r\n\r\ndef compute_metrics(eval_pred):\r\n predictions, labels = eval_pred\r\n predictions = np.argmax(predictions, axis=1)\r\n return accuracy.compute(predictions=predictions, references=labels)\r\n\r\nfood = food.map(process, batched=True).shuffle(seed=SEED)\r\ndata_collator = DefaultDataCollator(return_tensors=\"tf\")\r\nmodel = TFAutoModelForImageClassification.from_pretrained(\r\n checkpoint,\r\n id2label=id2label,\r\n label2id=label2id,\r\n)\r\ntf_train_dataset = food[\"train\"].select(range(BATCH_SIZE * NUM_BATCHES)).to_tf_dataset(\r\n columns=['pixel_values'],\r\n label_cols=[\"label\"],\r\n shuffle=True,\r\n batch_size=BATCH_SIZE,\r\n collate_fn=data_collator\r\n)\r\ntf_eval_dataset = food[\"test\"].select(range(BATCH_SIZE * NUM_BATCHES)).to_tf_dataset(\r\n columns=['pixel_values'],\r\n label_cols=[\"label\"],\r\n shuffle=True,\r\n batch_size=BATCH_SIZE,\r\n collate_fn=data_collator\r\n)\r\noptimizer, lr_schedule = create_optimizer(\r\n init_lr=LEARNING_RATE,\r\n num_train_steps=len(food[\"train\"]) * NUM_EPOCHS,\r\n weight_decay_rate=WEIGHT_DECAY_RATE,\r\n num_warmup_steps=0,\r\n)\r\nloss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\r\nmodel.compile(optimizer=optimizer, loss=loss)\r\nmetric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_eval_dataset)\r\npush_to_hub_callback = PushToHubCallback(\r\n output_dir=\"amyeroberts/my_food_classifier\",\r\n tokenizer=image_processor,\r\n)\r\nmodel.fit(\r\n tf_train_dataset,\r\n validation_data=tf_eval_dataset,\r\n epochs=NUM_EPOCHS,\r\n callbacks=[metric_callback, push_to_hub_callback]\r\n)\r\n```",
"@amyeroberts We generally avoid displaying progress bars from that callback, because the upload of each checkpoint runs in the background while the next epoch is training. As a result, the callback progress bar would run into the Keras progress bar and cause chaos in the console output.\r\n\r\nHowever, I think the callback was supposed to display a progress bar for the final upload after training is finished, when there's no risk of running into the Keras bars. This is also the only upload that will actually cause any delays. I'll put that on my list to investigate!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### System Info
Adding a `PushToHubCallback` when training a TF model in a Jupyter notebook results in the cell hanging upon training completion. Nothing is pushed to Hub. Here's the callback:
```
push_to_hub_callback = PushToHubCallback(
output_dir="my_food_classifier",
tokenizer=image_processor,
)
callbacks = [metric_callback, push_to_hub_callback]
model.fit(
tf_train_dataset,
validation_data=tf_eval_dataset,
epochs=num_epochs,
callbacks=callbacks
)
```
A Jupyter notebook where this can be reproduced is linked below, however, I'm getting the same result, when running this as a script, not in a notebook environment.
- `transformers` version: 4.25.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.27
- Python version: 3.8.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@Rocketknight1 @amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1OlFUm41Tqfz4v4oHJ4XzJEpSwmlGq9z3#scrollTo=daKTh8apJHU_
### Expected behavior
I would expect that the callback would save and push the model to the Hub once per epoch, and, possibly, upon training completion.
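A sketch of the partial workaround noted in the comments (saves and pushes only once, at the end of training):
```python
from transformers.keras_callbacks import PushToHubCallback

push_to_hub_callback = PushToHubCallback(
    output_dir="my_food_classifier",
    tokenizer=image_processor,  # as defined earlier in the notebook
    save_strategy="no",         # skip per-epoch checkpoints; push once at the end
)
```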
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21116/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21115
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21115/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21115/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21115/events
|
https://github.com/huggingface/transformers/pull/21115
| 1,532,801,246
|
PR_kwDOCUB6oc5HW0ox
| 21,115
|
Fixed typo in docstring
|
{
"login": "tkburis",
"id": 20501289,
"node_id": "MDQ6VXNlcjIwNTAxMjg5",
"avatar_url": "https://avatars.githubusercontent.com/u/20501289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tkburis",
"html_url": "https://github.com/tkburis",
"followers_url": "https://api.github.com/users/tkburis/followers",
"following_url": "https://api.github.com/users/tkburis/following{/other_user}",
"gists_url": "https://api.github.com/users/tkburis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tkburis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tkburis/subscriptions",
"organizations_url": "https://api.github.com/users/tkburis/orgs",
"repos_url": "https://api.github.com/users/tkburis/repos",
"events_url": "https://api.github.com/users/tkburis/events{/privacy}",
"received_events_url": "https://api.github.com/users/tkburis/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @sgugger,\r\n\r\nI've looked at the CircleCI test and it seems like it failed because of a timeout when installing from PIP, not because of styling. Is there any way to rerun the tests?",
"Indeed, everything is green on re-launch. Thanks for your contribution!"
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
Missing 'to' in 'pad the inputs the maximum length'.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21115/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21115",
"html_url": "https://github.com/huggingface/transformers/pull/21115",
"diff_url": "https://github.com/huggingface/transformers/pull/21115.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21115.patch",
"merged_at": 1673777011000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21114
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21114/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21114/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21114/events
|
https://github.com/huggingface/transformers/issues/21114
| 1,532,730,066
|
I_kwDOCUB6oc5bW5rS
| 21,114
|
Deprecating `position_ids` in GPTJ
|
{
"login": "KaijuML",
"id": 25499439,
"node_id": "MDQ6VXNlcjI1NDk5NDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/25499439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KaijuML",
"html_url": "https://github.com/KaijuML",
"followers_url": "https://api.github.com/users/KaijuML/followers",
"following_url": "https://api.github.com/users/KaijuML/following{/other_user}",
"gists_url": "https://api.github.com/users/KaijuML/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KaijuML/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KaijuML/subscriptions",
"organizations_url": "https://api.github.com/users/KaijuML/orgs",
"repos_url": "https://api.github.com/users/KaijuML/repos",
"events_url": "https://api.github.com/users/KaijuML/events{/privacy}",
"received_events_url": "https://api.github.com/users/KaijuML/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Friendly ping @ArthurZucker and @younesbelkada ",
"I agree with you, opened a PR for this. Thanks for reporting. "
] | 1,673
| 1,677
| 1,677
|
NONE
| null |
### System Info
latest transformers version, pytorch
### Who can help?
Hi @ArthurZucker and @younesbelkada, let me know if someone else should be tagged in this, esp. considering this is a harmless "bug", not something really urgent.
Basically what the title says: I think the position_ids in the [GPT-J code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gptj/modeling_gptj.py) should be deprecated, since they are not used anywhere as far as I can tell. Something along the lines of what BLOOM does at:
https://github.com/huggingface/transformers/blob/main/src/transformers/models/bloom/modeling_bloom.py#L891
I can also submit a PR if you want, just wanted to make sure I didn't overlook anything.
Let me know,
Clément
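For illustration, a minimal sketch of the BLOOM-style pattern linked above (hypothetical helper, not the actual GPT-J code):
```python
import warnings

def warn_if_position_ids(**deprecated_arguments):
    """Warn when a no-op `position_ids` kwarg is passed, as BLOOM does."""
    if deprecated_arguments.pop("position_ids", False) is not False:
        warnings.warn(
            "`position_ids` have no functionality in this model and will be "
            "removed in a future version. You can safely stop passing them.",
            FutureWarning,
        )
    if deprecated_arguments:
        raise ValueError(f"Got unexpected arguments: {deprecated_arguments}")

warn_if_position_ids(position_ids=[0, 1, 2])  # emits a FutureWarning
```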
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
N/A
### Expected behavior
Cleaner code, no useless no-op
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21114/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21114/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21113
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21113/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21113/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21113/events
|
https://github.com/huggingface/transformers/pull/21113
| 1,532,727,858
|
PR_kwDOCUB6oc5HWkqr
| 21,113
|
Fixing offline mode for pipeline (when inferring task).
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> So the current `test_offline_mode` should be split in two, with one test just doing TRANSFORMERS_OFFLINE=1 and somehow testing there are no calls to the Hub, and a second test `test_cached_is_used_when_offline` where we do the mock but don't touch TRANSFORMERS_OFFLINE. The test you're adding should be like `test_offline_mode` and not mock anything (unless it's to check there are no calls to the internet).\r\n\r\nFor the added test this is indeed the only thing proving it fails when the code doesn't check the offline mode.\r\n\r\nFor the other tests if I understand correctly it's `TRANSFORMERS_OFFLINE=1` -> Users asks us to not touch internet, regardless of if internet is available or not. We should FAIL if we're hitting the internet (hence the mock).\r\n\r\nIf internet is not available, regardless of `TRANSFORMERS_OFFLINE` we should default to the cached version.\r\n\r\nThat's ok for the `from_pretrained` but I don't think this is doable with the pipeline task, because it's not included directly in the model + config, right ? Only the README.md has that information, of which we do not have a cached version, correct ? (Don't think we should either).\r\n\r\nIf that's correct, then I'm ok with splitting the tests, but the mock should still probably be in both tests, 1 to fake a buggy internet, the other to make sure we trigger an failure when we actually use internet even after been explicitly asked not to do it, no ?\r\n\r\n(We could change the mock errors strings to reflect that difference)",
"Agreed! And yes the pipeline task for those tests needs to be passed, it can't be retrieved in offline mode/internet fails.",
"Made the changes is that what you had in mind ?"
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
```python
pipe = pipeline(model="xx")
```
This was actually hitting the network even when `TRANSFORMERS_OFFLINE=1` was set.
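For reference, a minimal sketch of the usage this fix targets (the model name above is a placeholder; note that in offline mode the `task` must be passed explicitly, since inferring it requires fetching the model's README.md from the Hub):

```python
# Run with: TRANSFORMERS_OFFLINE=1 python script.py
# Assumes the model was already downloaded to the local cache in an online run.
from transformers import pipeline

# Passing `task` explicitly avoids the Hub call that would otherwise infer it.
pipe = pipeline(
    task="text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(pipe("Offline inference served from the cache."))
```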
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21113/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21113",
"html_url": "https://github.com/huggingface/transformers/pull/21113",
"diff_url": "https://github.com/huggingface/transformers/pull/21113.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21113.patch",
"merged_at": 1673965481000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21112
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21112/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21112/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21112/events
|
https://github.com/huggingface/transformers/pull/21112
| 1,532,675,335
|
PR_kwDOCUB6oc5HWZXh
| 21,112
|
Refactoring of the text generate API docs
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,674
| 1,673
|
CONTRIBUTOR
| null |
This is part 2 of the refactoring efforts in the text generation docs. The first part (here - https://github.com/huggingface/transformers/pull/21090) adds an introductory guide with examples.
The second part of the refactor (this PR) reduces repetitive examples and somewhat trims down the API reference doc. Only documentation is affected by this PR.
The text generation API doc page could be trimmed down even further if we removed the docstrings of individual methods like greedy_search(), contrastive_search(), etc. At the moment, in 99% of cases one can use generate() directly; if it were 100%, I would remove these from the docs, but for now I have kept them.
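As a concrete illustration of that point (a minimal sketch using `gpt2` as an example checkpoint), greedy search is simply `generate()`'s default behaviour:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The text generation API", return_tensors="pt")
# With do_sample=False and num_beams=1, generate() performs greedy search,
# so calling greedy_search() directly is rarely needed.
outputs = model.generate(**inputs, do_sample=False, num_beams=1, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```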
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21112/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21112",
"html_url": "https://github.com/huggingface/transformers/pull/21112",
"diff_url": "https://github.com/huggingface/transformers/pull/21112.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21112.patch",
"merged_at": 1673976229000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21111
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21111/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21111/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21111/events
|
https://github.com/huggingface/transformers/pull/21111
| 1,532,589,432
|
PR_kwDOCUB6oc5HWGrj
| 21,111
|
[VideoMAE] Fix docstring
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Cc @ydshieh seems like the CI has only run 7 checks. Any idea why?",
"Because it doesn't touch any code. Tests are only run for code changes.",
"Because it doesn't touch any code. Tests are only run for code changes."
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes a docstring for VideoMAE (the model doesn't have a CLS token).
Fixes #21016
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21111/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21111",
"html_url": "https://github.com/huggingface/transformers/pull/21111",
"diff_url": "https://github.com/huggingface/transformers/pull/21111.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21111.patch",
"merged_at": 1673858376000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21110
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21110/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21110/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21110/events
|
https://github.com/huggingface/transformers/issues/21110
| 1,532,447,654
|
I_kwDOCUB6oc5bV0um
| 21,110
|
Add support for BLIP and GIT in image-to-text and VQA pipelines
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Hi @NielsRogge , can work on it?",
"Sure @atturaioe feel free to start working on it ",
" I am writing to inquire about the possibility of me starting work on this issue. @NielsRogge can I contribute?",
"@NielsRogge is this issue still open for contribution?",
"Yes ",
"@NielsRogge If nobody is working on it, I would like to pick up the issue.",
"I would like to pick the issue if its still available.",
"@NielsRogge is this issue still open to contribute . I would like to work on it ",
"Support for BLIP in the image-to-text pipeline has been added in #21904. GIT can be added as explained in [this comment](https://github.com/huggingface/transformers/issues/21514#issuecomment-1446420536), feel free to open a PR.\r\n\r\nSupport for the VQA pipeline still needs to be added for both, also there contributions are welcome.",
"@NielsRogge can I work on this issue??",
"Hello @NielsRogge ! \r\n\r\nI would like to work on this issue (add support for VQA to GIT model) as a first contribution.\r\n\r\nBut before I start, I have a question :\r\n\r\nCurrently the only model implementing the VQA pipeline is `ViltForQuestionAnswering`, it does the task using [classification](https://github.com/huggingface/transformers/blob/4baa34c18f18274fe028ad5a5511ea3fba9eeece/src/transformers/models/vilt/modeling_vilt.py#L1079) \r\n\r\nHowever in [GIT paper](https://arxiv.org/abs/2205.14100) they say that : \r\n\r\n> For VQA, the input question is treated as a text prefix, and the answer is generated in an auto-regressive way. Furthermore, we present a new generation-based scheme for ImageNet classification, where the predicted labels come directly from our generative model without pre-defining the vocabulary.\r\n\r\nSo I wonder if I should implement it as a classifier or should I follow the paper ? \r\n\r\nThanks",
"Hi @marechaux, we will need to implement the 2 different approaches in the VQA pipeline. ViLT and GIT indeed solve VQA entirely different (ViLT is a classifier whereas GIT is a generative GPT-like model).",
"> Support for BLIP in the image-to-text pipeline has been added in #21904. GIT can be added as explained in [this comment](https://github.com/huggingface/transformers/issues/21514#issuecomment-1446420536), feel free to open a PR.\r\n> \r\n> Support for the VQA pipeline still needs to be added for both, also there contributions are welcome.\r\n\r\nHey @NielsRogge, took a shot at this. Am I correct in understanding that the ideal implementation of \"microsoft/git-base\" in the image-to-text pipeline would look something like this?\r\n\r\n```python\r\nfrom transformers import AutoProcessor, GitForVision2Seq\r\n\r\nprocessor = AutoProcessor.from_pretrained(\"microsoft/git-base\")\r\nmodel = GitForVision2Seq.from_pretrained(\"microsoft/git-base\")\r\n\r\npipe = pipeline(\"image-to-text\", model=model, image_processor=processor.image_processor, tokenizer=processor.tokenizer)\r\nprint(pipe(\"https://www.wideopenpets.com/wp-content/uploads/sites/6/2021/12/Popular-Horse-Feature-Image.png\"))\r\n```\r\n\r\nIf so, I got this to work by:\r\n\r\n1. Adding the GitForVision2Seq class and making it available for imports / in MODEL_FOR_VISION_2_SEQ_MAPPING_NAMES\r\n2. Updating src/transformers/models/git/processing_git.py to use a custom GITImageProcessor. This GITImageProcessor is an exact copy of the CLIPImageProcessor that GitProcessor already wraps, with the only difference being how the GITImageProcessor.preprocess method returns data when being called by the ImageToTextPipeline.preprocess method (Basically adding the input_ids key with a value of None ). \r\n\r\nSo the GITImageProcessor.preprocesor ends with this:\r\n\r\n```python\r\ndata = {\"pixel_values\": images} \r\nreturn_data = BatchFeature(data=data, tensor_type=return_tensors) \r\nreturn_data['input_ids'] = None \r\nreturn return_data\r\n```\r\n\r\nrather than the CLIPImageProcessor.preprocessor method returning this\r\n```python\r\ndata = {\"pixel_values\": images} \r\nreturn BatchFeature(data=data, tensor_type=return_tensors) \r\n```\r\n\r\nCurious your thoughts on this approach. How would this would affect other GIT image processing workflows (i.e. VQA, etc.)? Could we can use a conditional to account for those?",
"Thanks for taking a stab at this. I'm fine with adding a `GitForVision2Seq` (as proposed by @Narsil) however it'd be great to not having to add a custom `GITImageProcessor`. What's the reason this is added? Is it only to include \"input_ids\" which are set to `None`?",
"Exactly this - 'only to include \"input_ids\" which are set to None?'\r\n\r\nI see how adding an entirely new GITImageProcessor seems excessive when all it would do is add the Input_ids : None key value pair to data being returned from the .preprocess method. \r\n\r\nAs you describe here, https://github.com/huggingface/transformers/issues/21514#issuecomment-1446359970, Once we hit the preprocess method in ImageToTextPipeline and map the model to git, the model_inputs are returned (via the CLIPImageProcessor through the GITProcessor in processing_git.py) without the input_ids key. So AFAIK, the best we can do is modify the return value of the CLIPImageProcessor.preprocess method without changing the CLIPImageProcessor class by replicating the CLIPImageProcessor, rebranding it as a GITImageProcessor, and modify the .preprocess method. \r\n\r\nLet me know if that works or if you feel there is a better approach. Is the idea that there would be some way to do this within GitForVision2Seq?\r\n\r\nAs an aside, I read some best practices for working in the transformers library (https://huggingface.co/transformers/v4.10.1/add_new_model.html#general-overview-of-transformers). Would it be preferable to copy the entire CLIPImageProcessor class as GITImageProcessor within processing_git.py or do something more like this within processing_git.py.\r\n\r\n```python\r\nclass GITImageProcessor(CLIPImageProcessor):\r\n def preprocess(self, *args, **kwargs):\r\n # Call the original preprocess method\r\n return_data = super().preprocess(*args, **kwargs)\r\n \r\n # Add 'input_ids' key to the data\r\n return_data['input_ids'] = None\r\n\r\n return return_data\r\n```",
"Hmm I don't get why `input_ids` need to be set to `None`. Could you clarify?\r\n\r\n[This example](https://huggingface.co/docs/transformers/model_doc/git#transformers.GitForCausalLM.forward.example) shows that you only need to pass `pixel_values` to the `generate` method to do image captioning.",
"Hello, it seems that the BLIP for the image to text pipeline has been completed, however that the VQA pipeline for both BLIP & GIT are not complete, along with the image to text pipeline for GIT. @marechaux how is the VQA going for GIT?",
"Hi! I'm also interested in helping out if we can divide the work :) ",
"Hey @NielsRogge , I was working on VQA pipeline for BLIP but i am confused how can i give `pixel_values` to `_forward` method in `VisualQuestionAnsweringPipeline` [(src)](https://github.com/Tanmaypatil123/transformers/blob/main/src/transformers/pipelines/visual_question_answering.py#L19) because BLIP requires pixel values and those are generated by preprocessor . Sorry if this is silly question because this is my first open source contribution .",
"Hi @Tanmaypatil123 there's already this PR: #23348. Feel free to take it over/improve it",
"Hello, can I work on this?",
"Hi Team, Can I start working on it ? ",
"Hi @NielsRogge, I would like to try to add GIT for VQA as my first contribution, is it ok?\r\nI looked at #23348 , and I want to know if it is fine to return the full generated text, I make it work locally, so I could prepare a PR if no one else is working on this.\r\n\r\nI believe the input_ids or its lenght could be used in the postprocess of VisualQuestionAnsweringPipeline to remove the prompt/prefix, like in TextGenerationPipeline, but it will require to do a refactor in _forward in VisualQuestionAnsweringPipeline to return also the input_ids.\r\n\r\ne.g.\r\nhttps://github.com/huggingface/transformers/blob/fe3c8ab1af558b95f67f5fafc0c55f09fd2b09db/src/transformers/pipelines/text_generation.py#L294-L305",
"Is this still open for contribution? Would love to help out. ",
"Hi @astern21, I started a draft PR, but didn't get to finish it, and now I'm not really working on it."
] | 1,673
| 1,700
| null |
CONTRIBUTOR
| null |
### Feature request
BLIP and GIT are 2 recent additions in the library, providing state-of-the-art performance for tasks like image captioning and visual question answering (VQA). GIT is even capable of video captioning and video QA.
Hence it makes sense to support them in our image-to-text and VQA pipelines.
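To illustrate what the pipelines would wrap, here is a minimal captioning sketch based on the documented GIT usage (the checkpoint name and image URL are examples):

```python
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco")

# Any RGB image works; this one is a commonly used test image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```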
### Motivation
Having support for better models in pipelines is very desired!
See also a request for it here: https://discuss.huggingface.co/t/support-for-different-models-in-text-to-image-pipeline/29504
### Your contribution
I can assist in adding support, see #18446 as a very similar case
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21110/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21110/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/21109
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21109/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21109/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21109/events
|
https://github.com/huggingface/transformers/pull/21109
| 1,532,382,346
|
PR_kwDOCUB6oc5HVZmz
| 21,109
|
Add visualbert in visual question answering model mapping
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"No, but we should check if the pipeline testing (in the PR CI job page) runs against this newly added `visual_bert`.\r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"VisualBERT isn't supported by the VQA pipeline, the pipeline is currently very specifically implemented to work with ViLT."
] | 1,673
| 1,700
| 1,673
|
COLLABORATOR
| null |
Following the doc https://huggingface.co/docs/transformers/v4.25.1/en/model_doc/visual_bert#transformers.VisualBertForQuestionAnswering I reckon this one was missing.
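For context, the change presumably boils down to a single entry in the auto mapping, along these lines (a sketch of the pattern, not the exact diff):

```python
# src/transformers/models/auto/modeling_auto.py (pattern sketch)
from collections import OrderedDict

MODEL_FOR_VISUAL_QUESTION_ANSWERING_MAPPING_NAMES = OrderedDict(
    [
        ("vilt", "ViltForQuestionAnswering"),
        ("visual_bert", "VisualBertForQuestionAnswering"),  # the entry added here
    ]
)
```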
## Who can review?
@ydshieh do I need to write any additional test for this?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21109/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21109",
"html_url": "https://github.com/huggingface/transformers/pull/21109",
"diff_url": "https://github.com/huggingface/transformers/pull/21109.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21109.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21108
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21108/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21108/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21108/events
|
https://github.com/huggingface/transformers/issues/21108
| 1,532,365,733
|
I_kwDOCUB6oc5bVgul
| 21,108
|
QuestionAnsweringPipeline top_k returns single result
|
{
"login": "henrique-b",
"id": 113020702,
"node_id": "U_kgDOBryPHg",
"avatar_url": "https://avatars.githubusercontent.com/u/113020702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/henrique-b",
"html_url": "https://github.com/henrique-b",
"followers_url": "https://api.github.com/users/henrique-b/followers",
"following_url": "https://api.github.com/users/henrique-b/following{/other_user}",
"gists_url": "https://api.github.com/users/henrique-b/gists{/gist_id}",
"starred_url": "https://api.github.com/users/henrique-b/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/henrique-b/subscriptions",
"organizations_url": "https://api.github.com/users/henrique-b/orgs",
"repos_url": "https://api.github.com/users/henrique-b/repos",
"events_url": "https://api.github.com/users/henrique-b/events{/privacy}",
"received_events_url": "https://api.github.com/users/henrique-b/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You are entirely correct to wish for it.\r\n\r\nThe current behavior is linked to our commitment to not break things. \r\n\r\nIf you actually check out the code, you'll see this is an exception because there's only 1 returned value (Because only 1 return value is possible).\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L596-L597\r\nThere's actually another one: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L391-L392\r\n\r\nSince we're not breaking things, and this pipeline was written a long time ago, it was not necessarily aligned with other pipelines. So these quircks are unfortunately necessary.\r\n\r\nThis is the one thing I would like to modify in V5 (cleanup pipelines return types to make them extremely consistent).\r\n\r\nI hope you understand the current state of things. Not sure if there's anything to be done about it."
] | 1,673
| 1,673
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.15.0-1025-gcp-x86_64-with-glibc2.31
- Python version: 3.9.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.9.0+cu111 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes: tesla T4
- Using distributed or parallel set-up in script?: No
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When using a QuestionAnsweringPipeline with the `top_k` parameter set to a number greater than 1, the model can still return a single answer in the form of a dictionary.
Example to reproduce the bug:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, QuestionAnsweringPipeline
pipeline = QuestionAnsweringPipeline(
model=AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad"),
tokenizer=AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
)
pipeline([{
"context": " 1 ",
"question": "What is Anne's age?"
}], top_k=10)
```
### Expected behavior
When the `top_k` parameter is set, I would expect the call to the model to return a list containing up to `top_k` of the best predictions, when possible. If the model only outputs one answer, I would expect this answer to be within a list.
When there are no possible answers, the returned value is an empty list. When there are multiple answers, the returned value is also a list. Outputting a dictionary creates an edge case that needs to be handled when, for example, iterating over the outputs of the model.
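Until the return types are made consistent, a small workaround sketch (assuming the output shapes described above, and reusing the `pipeline` instance from the reproduction snippet):

```python
def as_answer_list(output):
    # The pipeline returns a bare dict for a single answer and a list
    # (possibly empty) otherwise; normalize so callers can always iterate.
    return output if isinstance(output, list) else [output]

result = pipeline({"context": " 1 ", "question": "What is Anne's age?"}, top_k=10)
for answer in as_answer_list(result):
    print(answer["answer"], answer["score"])
```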
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21108/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21107
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21107/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21107/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21107/events
|
https://github.com/huggingface/transformers/pull/21107
| 1,532,212,224
|
PR_kwDOCUB6oc5HU0-S
| 21,107
|
Update `TFTapasEmbeddings`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,673
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
Update `TFTapasEmbeddings` to fix `test_embeddings_out_of_bounds_raise_exception` for `TFTapas`.
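The fix presumably follows the bounds-check pattern used in other TF models (a sketch under that assumption, not the exact diff):

```python
import tensorflow as tf

def check_embeddings_within_bounds(input_ids: tf.Tensor, vocab_size: int) -> None:
    # Raise a clear error when any index would fall outside the embedding
    # matrix, instead of silently gathering out-of-range rows.
    tf.debugging.assert_less(
        input_ids,
        tf.cast(vocab_size, dtype=input_ids.dtype),
        message=f"input_ids must be smaller than the embedding layer's vocab size ({vocab_size})",
    )
```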
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21107/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21107",
"html_url": "https://github.com/huggingface/transformers/pull/21107",
"diff_url": "https://github.com/huggingface/transformers/pull/21107.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21107.patch",
"merged_at": 1673879391000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21106
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21106/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21106/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21106/events
|
https://github.com/huggingface/transformers/pull/21106
| 1,532,202,897
|
PR_kwDOCUB6oc5HUy6h
| 21,106
|
Update modeling doc strings FE -> IP
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Replaces references to feature extractors with image processors that were missed in the first batch of changes.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21106/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21106",
"html_url": "https://github.com/huggingface/transformers/pull/21106",
"diff_url": "https://github.com/huggingface/transformers/pull/21106.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21106.patch",
"merged_at": 1674213491000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21105
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21105/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21105/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21105/events
|
https://github.com/huggingface/transformers/pull/21105
| 1,532,070,674
|
PR_kwDOCUB6oc5HUWAq
| 21,105
|
Make `test_save_pretrained_signatures` slow test
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks! I'd like to have @Rocketknight1 and @gante approve on this before we merge (or find a way to make this test slower)."
] | 1,673
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
`UtilsFunctionsTest.test_save_pretrained_signatures` in `tests/test_modeling_tf_common.py` was introduced in Oct. 2022; it runs for about 60 seconds and has failed many times **on Push CI** due to the timeout limit in the Push CI job settings (`PYTEST_TIMEOUT`).
Notice that on CircleCI we have `PYTEST_TIMEOUT=120`, as we use the flag `-n 8` to run tests in parallel. I don't want to set `120` for Push CI at this moment (without trying it first), as it might slow the CI down (a lot).
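For reference, a sketch of what marking the test slow looks like (simplified; the real test body is omitted):

```python
import unittest

from transformers.testing_utils import slow


class UtilsFunctionsTest(unittest.TestCase):
    # @slow skips the test unless RUN_SLOW=1 is set, which keeps it out of
    # the time-limited Push CI job and in the scheduled slow-test run instead.
    @slow
    def test_save_pretrained_signatures(self):
        ...
```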
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21105/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21105",
"html_url": "https://github.com/huggingface/transformers/pull/21105",
"diff_url": "https://github.com/huggingface/transformers/pull/21105.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21105.patch",
"merged_at": 1674034986000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21104
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21104/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21104/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21104/events
|
https://github.com/huggingface/transformers/pull/21104
| 1,531,945,616
|
PR_kwDOCUB6oc5HT6rH
| 21,104
|
[Tokenizers] Fix a small typo
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,673
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
Fixes #21073: there was a typo in the `__repr__` method of `PreTrainedTokenizer`, which used the old `model_max_len` instead of `model_max_length`.
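A sketch of the kind of one-line fix involved (illustrative, not the exact source):

```python
def __repr__(self) -> str:
    # The f-string previously interpolated the stale `model_max_len`
    # attribute name; it should use `model_max_length`.
    return (
        f"{self.__class__.__name__}(name_or_path='{self.name_or_path}', "
        f"model_max_length={self.model_max_length}, ...)"
    )
```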
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21104/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21104",
"html_url": "https://github.com/huggingface/transformers/pull/21104",
"diff_url": "https://github.com/huggingface/transformers/pull/21104.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21104.patch",
"merged_at": 1673623295000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21103
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21103/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21103/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21103/events
|
https://github.com/huggingface/transformers/issues/21103
| 1,531,660,482
|
I_kwDOCUB6oc5bS0jC
| 21,103
|
Fine-tuning wav2vec2 model: eval_loss & eval_wer keep increasing
|
{
"login": "lgq-liao",
"id": 12348652,
"node_id": "MDQ6VXNlcjEyMzQ4NjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/12348652?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lgq-liao",
"html_url": "https://github.com/lgq-liao",
"followers_url": "https://api.github.com/users/lgq-liao/followers",
"following_url": "https://api.github.com/users/lgq-liao/following{/other_user}",
"gists_url": "https://api.github.com/users/lgq-liao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lgq-liao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lgq-liao/subscriptions",
"organizations_url": "https://api.github.com/users/lgq-liao/orgs",
"repos_url": "https://api.github.com/users/lgq-liao/repos",
"events_url": "https://api.github.com/users/lgq-liao/events{/privacy}",
"received_events_url": "https://api.github.com/users/lgq-liao/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"It's hard to work out the exact issue but every time you run `ctc_finetune.py` you're resetting the learning rate and performing warmup all over again. This means that your training parameters are going to be all over the show so you only really want to run a training script once.\r\n\r\nWith a dataset that large, you're probably better off [sharding](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) the datasets and then saving the shards to disk in sizes of around 1GB.\r\n\r\n```python\r\nnum_shards = 50 # or whatever\r\nds = ... # some large dataset\r\n\r\nfor shard_idx in range(num_shards)):\r\n shard = ds.shard(num_shards, shard_idx, contiguous=True)\r\n shard.save_to_disk(f\"shard_{shard_idx}\")\r\n```\r\n\r\nThen load the dataset by concatenating the shards. Something like:\r\n\r\n```python\r\nfrom datasets import load_from_disk, concatenate_datasets\r\n\r\nds = concatenate_datasets([load_from_disk(shard_fp) for shard_fp in shard_paths])\r\n```\r\n\r\nThen running the custom training script again after making these changes or following the guide [here](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2#prepare-data-tokenizer-feature-extractor) and tweaking the notebook.\r\n\r\nFrom a performance perspective, you're probably more likely to get better results with the [XLS-R weights](https://huggingface.co/facebook/wav2vec2-xls-r-300m) so try that first.\r\n\r\nAlso the [forums](https://discuss.huggingface.co/) are probably a better source of help for this than here.",
"Also note that CTC training is not very robust, I'd recommend to try out a bunch of different hyper-paremeters. The most important one to get right is usually the learning rate and dropout ",
"Also cc @sanchit-gandhi ",
"> ```python\r\n> ds = concatenate_datasets([load_from_disk(shard_fp) for shard_fp in shard_paths])\r\n> ```\r\n\r\nThanks **OllieBroadhurst** and **Patrickvonplaten** for the guide. \r\n\r\n- I had tried to create all the dataset into one ds file, which had no issue to create such ds in my end, expected it was extremely slow (about few days). \r\n- With the large ds file, then I run the training script above, somehow it hung and stucked forever at the loading of ds. That's why I split the dataset into small batch. \r\n- I haven't tried the **shading** soultion, just curiously, by concatenate_datasets, will it be the same effect as creating the one ds file? \r\n ",
"As a rule of thumb, using datasets of 1GB or less shouldn't cause problems. Sharding just chunks the dataset up into smaller datasets, not too different to what you were doing before. Replace the line of code where you save your large dataset with the code that splits it into shards and saves each shard. No need to save the whole thing as one large dataset.\r\n\r\nThe `concatenate_datasets` should work fine. The contents of the dataset wont be loaded into memory, it's only done so while it's being iterated over during training.\r\n\r\n",
"Thank you **OllieBroadhurst** for the prompt response.\r\n- Yes I concatenated all the small ds in to one as you suggested. The process was pretty fast, only took seconds to reach\r\nthe line of **data_collator = DataCollatorCTCWithPadding(processor=processor)**, and it was stucking there or the **Trainer initialization** for a long time.\r\n- It took long time to go through this portion of code as follows:\r\n`\r\n//Instantiate custom data collator\r\n data_collator = DataCollatorCTCWithPadding(processor=processor)\r\n\r\n //Initialize Trainer\r\n trainer = Trainer(\r\n model=model,\r\n data_collator=data_collator,\r\n args=training_args,\r\n compute_metrics=compute_metrics,\r\n train_dataset=train_dataset if training_args.do_train else None,\r\n eval_dataset=eval_dataset if training_args.do_eval else None,\r\n tokenizer=feature_extractor,\r\n )\r\n` \r\n- It seems it only use single thread to execute the data process in somewhere. Does it possible to speed up the data process ? (eg, by multiple threads) \r\n",
"This is almost certainly the trainer which is pretty complex so it's hard to tell why. You can check if it's the dataset by using the top few rows, like `train_dataset.select(range(100))` and seeing if a smaller model works. Also check your memory usage maybe.\r\n\r\nI'm not 100% what you mean by \"data process\"?",
"> This is almost certainly the trainer which is pretty complex so it's hard to tell why. You can check if it's the dataset by using the top few rows, like `train_dataset.select(range(100))` and seeing if a smaller model works. Also check your memory usage maybe.\r\n> \r\n> I'm not 100% what you mean by \"data process\"?\r\n\r\n- It only takes 7GB out of 128GB total memory. \r\n- And 2 out of 32 cores taken, 2GB out of 48GB used for each GPU(total 2) .\r\n- Distributed launch the training with local_rank=2 as follows : \r\n python -m torch.distributed.launch **--nproc_per_node=2** ctc_finetune.py --train --eval -lr 2\r\n- ~20 hours passed, so far, it is still stucking in same steps mentioned above. \r\n- If I use small dataset, it works. For the size of 100K samples dataset, it will take about **30**minutes to pass through the above steps.\r\n- I jus felt that the stucking might be caused by the itration of the dataset elements in the training code somewhether. However I did change any code from the libs. No sure if it caused by my configurations\r\n- I had put my model output configs at [Json_Configs](https://drive.google.com/drive/folders/1p19ohigIUxlGyLei72ZfABXVrI611h3I?usp=share_link). Appreciate if you can have a look on it. ",
"Start by eliminating any multi-device influence. Try `python -m torch.distributed.launch ctc_finetune.py --train --eval -lr 2`. If that doesn't work, try `python -m ctc_finetune.py --train --eval`.",
"Thanks OllieBroadhurst\r\n- I will try out your suggestion and let you know the result.\r\n- For the stucking issue, after 21hours, it finally went through the stucking point above, and started the training iteration as follow:\r\n 1%|▎ | 42500/**4338800** [7:42:00<**590:50:23**, 2.02it/s]Saving model checkpoint to /mnt/workspace/output/train_eval/checkpoint-42500\r\nConfiguration saved in /mnt/workspace/output/train_eval/checkpoint-42500/config.json\r\n\r\n{'loss': 286.0071, 'learning_rate': 4.9099999999999994e-05, 'epoch': 0.0}\r\n{'loss': 114.5247, 'learning_rate': 0.0002977528714424097, 'epoch': 0.16}\r\n{'loss': 111.7823, 'learning_rate': 0.0002977182757507265, 'epoch': 0.17}\r\n{'loss': 112.0589, 'learning_rate': 0.00029754529729231053, 'epoch': 0.18}\r\n{'loss': 110.4228, 'learning_rate': 0.0002973033350246782, 'epoch': 0.19}\r\n{'loss': 109.3662, 'learning_rate': 0.00029723414364131185, 'epoch': 0.2}\r\n\r\n- Currently the estimated training is about 590 hours, I need increase the **train_batch_size** to reduce the time (the GPU load was only taken 13G/48G for each GPU). \r\n- Is there other way to reduce the training time, other than train_batch_size ? ",
"You probably won't need the full 590 hours. You would most likely stop when the `eval_loss` starts plateauing. Try the XLS-R model if you aren't already, the loss should converge quicker. Feel free to increase `train_batch_size` but also consider the `fp16` argument which will help things a lot.\r\n\r\nYou might also want to increase the steps between evaluation which can take a lot of time depending on your eval dataset size.",
"> Start by eliminating any multi-device influence. Try `python -m torch.distributed.launch ctc_finetune.py --train --eval -lr 2`. If that doesn't work, try `python -m ctc_finetune.py --train --eval`.\r\n\r\nHello **OllieBroadhurst** , thanks for the guiding. \r\n- I had tried `python -m torch.distributed.launch ctc_finetune.py --train --eval -lr 2` as your suggestion. it ended with same stucking as previously. \r\n- I traced the caller and identified the stucked point as follow:\r\n 1. `processor = AutoProcessor.from_pretrained(training_args.output_dir)`\r\n 2. `data_collator = DataCollatorCTCWithPadding(processor=processor)`\r\n 3. `trainer = Trainer(****)`\r\n 4. `train_result = trainer.train(resume_from_checkpoint=checkpoint)`\r\n `-> find_executable_batch_size -> self._inner_training_loop`\r\n ` ->train_dataloader = self.get_train_dataloader()` ----- **this was the stuck point, looks it was stucking at DataLoader()** \r\n- I suspected it might casued by the line of **self._remove_unused_columns(train_dataset, description=\"training\")** which inside of the caller of `get_train_dataloader()` as my dataset has one unused column **input_length**, however, I set the r**emove_unused_columns=False** in TrainingArguments, and rerun it, the stucking still there. \r\n- And ideal about the stucking cause ?",
"\r\n\r\n\r\n> You probably won't need the full 590 hours. You would most likely stop when the `eval_loss` starts plateauing. Try the XLS-R model if you aren't already, the loss should converge quicker. Feel free to increase `train_batch_size` but also consider the `fp16` argument which will help things a lot.\r\n> \r\n> You might also want to increase the steps between evaluation which can take a lot of time depending on your eval dataset size.\r\n\r\n- Yes, fp16 set to True in my script\r\n- And I increased the `train_batch_size` and `eval_batch_size` to maximize my training PC load. \r\n- For the model fine-tuning with customized Singapre English dataset, what base model should I started with ? \r\n- XLS-R, wav2vec2-large-robust-ft-libri-960h or wav2vec2-large-robust, what's the different if I pick up one of them as my base model? ",
"I would set `group_by_length` to `False` if you haven't already. This can take a very, very long time for large datasets like yours.\r\n\r\n`wav2vec2-large-robust` hasn't been fine-tuned, `wav2vec2-large-robust-ft-libri-960h` has been fine-tuned on English (Librispeech). `XLS-R` has been pretrained on 436 000 hours of audio from multiple language. It means that you'll get the best \"head start\" using those weights.",
"> I would set `group_by_length` to `False` if you haven't already. This can take a very, very long time for large datasets like yours.\r\n> \r\n> `wav2vec2-large-robust` hasn't been fine-tuned, `wav2vec2-large-robust-ft-libri-960h` has been fine-tuned on English (Librispeech). `XLS-R` has been pretrained on 436 000 hours of audio from multiple language. It means that you'll get the best \"head start\" using those weights.\r\n\r\n- Thanks so much for the issue troubleshooting. The `group_by_length= True` for my settings currently. I think it could be the cause. Will disable it and give a try.\r\n- Can I used fine tuned model such as `wav2vec2-large-robust-ft-libri-960h` to continue fine-tuning with my own dataset ?\r\n",
"You can go with whatever weights/architecture you like! `wav2vec2-large-robust-ft-libri-960h` is really great for English but I haven't tried it on other languages yet, feel free to give it a shot.",
"Thank you **OllieBroadhurst** for the advice. \r\n- Yup, my intention is to do the **incremental** training, in another word, to accumulate the multiple runs of training result to the existing model. \r\n- In order to achieve the goal above, I am not sure whether it's possible to fine tune the new dataset with the fine-tuned model \r\n- The answer seems you already gave to me at your first reply : **you're resetting the learning rate and performing warmup all over again**.\r\n- So my question is how can I achieve the **incremental** like training? \r\n - In my case, should I prepare all the datasets in one go ? Means start with **wav2vec2-large-robust + libri-960h dataset + my custermized dataset**\r\n",
"Really great point from @OllieBroadhurst about the learning rate reset! Related to this, it's worth making sure you're reloading your optimiser states if you're resuming training from a checkpoint to avoid the momentum (optimiser) terms being reset each time.\r\n\r\nRegarding incremental training, is this purely to save disk space (i.e. download smaller chunks of the data at a time)? Maybe you could try running training using [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet)? This way, you won't ever have to download anymore than 1 batch worth of data to your device at a time. You can check out the script [run_speech_recognition_ctc_streaming.py](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_streaming.py). It works in much the same way as the non-streaming script, so should look quite familiar! This will enable you to train your model in one go. \r\n\r\nThe main changes in using this script revolve around the fact that we don't know the **size** of our dataset a-priori, so we can't specify the number of training epochs. Instead, we have to specify a number of train steps. You can work out how many train steps you require: 1 epoch is `floor( total num samples / train batch size )` samples. ",
"Hello **Sanchit-gandhi**, thanks for the elaboration. \r\n- For the incremental training, what I am looking for is **to have a methodology to add on the addition new dataset** to existing pre-trained / fine-tuned model. \r\n- More specific, for example I want to fine-tune the model **wav2vec2-large-robust-ft-libri-960h** with my own dataset. Here are two options in my mind to run the fine-tuning:\r\n 1. Load the model wav2vec2-large-robust-ft-libri-960h + my dataset. \r\n 2. Load the original model **wav2vec2-large-robust** + **libri-960h dataset** + my dataset\r\n- For the **option 1**, my concern is the **learning rate reset** issue\r\n- For the **option 2**, my concern is, since we already have fine-tuned weight `wav2vec2-large-robust-ft-libri-960h`, it's no point to re-run it again. \r\n- Hope my explanation is clear. Do you have other option to recommend?",
"Hey @lgq-liao, thanks for the clarification. Indeed if you load the checkpoint [`wav2vec2-large-robust-ft-libri-960h`](https://huggingface.co/facebook/wav2vec2-large-robust-ft-libri-960h) there is little point in fine-tuning on LS 960h. Here, it only really makes sense to fine-tune on your own dataset.\r\n\r\nIf you're take the fine-tuned checkpoint `wav2vec2-large-robust-ft-libri-960h` and train it further on your additional dataset, you don't need to worry too much about matching the learning rates. I would just treat it as a new fine-tuning run and apply a standard learning rate procedure here. Since your dataset is likely out-of-domain with LS 960h, in effect it's like starting a new training run.",
"\r\n@sanchit-gandhi : got you. Thanks for the guiding. \r\n@OllieBroadhurst : The dataset loading stuck issue goes away after I set `group_by_length=False ` :+1: \r\n ",
"The training was running 140+ hours and it looks good before the **EPOCH 12.36**. \r\nWould you please tell me what was wrong after the EPOCH 12.36? \r\nHere is the log as follows: \r\n```\r\n{'loss': 142.6154, 'learning_rate': 0.00028539729505169863, 'epoch': 0.0}\r\n{'loss': 116.9755, 'learning_rate': 0.0002705168020679468, 'epoch': 1.0}\r\n{'eval_loss': 80.37191009521484, 'eval_wer': 0.07217616514530197, 'eval_runtime': 325.7806, 'eval_samples_per_second': 236.19, 'eval_steps_per_second': 7.382, 'epoch': 1.0}\r\n\t\t\t\t\t\t.\r\n{'loss': 105.4153, 'learning_rate': 0.0002554974150664697, 'epoch': 2.0}\r\n{'eval_loss': 73.21361541748047, 'eval_wer': 0.06538271103380858, 'eval_runtime': 316.0957, 'eval_samples_per_second': 243.426, 'eval_steps_per_second': 7.608, 'epoch': 2.0}\r\n\t\t\t\t\t\t.\r\n{'loss': 96.2782, 'learning_rate': 0.0002404780280649926, 'epoch': 3.0}\r\n{'eval_loss': 66.80726623535156, 'eval_wer': 0.06313430751456828, 'eval_runtime': 316.3667, 'eval_samples_per_second': 243.218, 'eval_steps_per_second': 7.602, 'epoch': 3.0}\r\n\t\t\t\t\t\t.\r\n{'loss': 89.7653, 'learning_rate': 0.00022545891802067946, 'epoch': 4.0}\r\n{'eval_loss': 61.151939392089844, 'eval_wer': 0.06073101679637412, 'eval_runtime': 315.8389, 'eval_samples_per_second': 243.624, 'eval_steps_per_second': 7.615, 'epoch': 4.0}\r\n\t\t\t\t\t\t.\r\n{'loss': 83.1802, 'learning_rate': 0.00021043939254062035, 'epoch': 5.0}\r\n{'eval_loss': 57.170997619628906, 'eval_wer': 0.0541305368999708, 'eval_runtime': 316.7452, 'eval_samples_per_second': 242.927, 'eval_steps_per_second': 7.593, 'epoch': 5.0}\r\n\t\t\t\t\t\t.\r\n{'loss': 74.4893, 'learning_rate': 0.00019541972858197931, 'epoch': 6.0}\r\n{'eval_loss': 53.97002410888672, 'eval_wer': 0.056226592354666295, 'eval_runtime': 316.7992, 'eval_samples_per_second': 242.886, 'eval_steps_per_second': 7.592, 'epoch': 6.0}\r\n\t\t\t\t\t\t.\r\n{'loss': 73.0806, 'learning_rate': 0.0001804002031019202, 'epoch': 7.0}\r\n{'eval_loss': 49.42509841918945, 'eval_wer': 0.04158086508309317, 'eval_runtime': 316.5823, 'eval_samples_per_second': 243.052, 'eval_steps_per_second': 7.597, 'epoch': 7.0}\r\n\t\t\t\t\t\t.\r\n{'loss': 65.0651, 'learning_rate': 0.0001653810930576071, 'epoch': 8.0}\r\n{'eval_loss': 44.11850357055664, 'eval_wer': 0.040122132365076744, 'eval_runtime': 316.0374, 'eval_samples_per_second': 243.471, 'eval_steps_per_second': 7.61, 'epoch': 8.0}\r\n\t\t\t\t\t\t.\r\n{'loss': 58.1436, 'learning_rate': 0.00015036170605613, 'epoch': 9.0}\r\n{'eval_loss': 40.9358024597168, 'eval_wer': 0.04316401538715452, 'eval_runtime': 315.3451, 'eval_samples_per_second': 244.006, 'eval_steps_per_second': 7.627, 'epoch': 9.0}\t\t\t\t\t\t\r\n\t\t\t\t\t\t.\r\n{'loss': 54.2643, 'learning_rate': 0.00013534231905465286, 'epoch': 10.0}\r\n'eval_loss': 36.22915267944336, 'eval_wer': 0.03583353434814072, 'eval_runtime': 316.2894, 'eval_samples_per_second': 243.277, 'eval_steps_per_second': 7.604, 'epoch': 10.0}\r\n\t\t\t\t\t\t.\r\n{'loss': 47.4946, 'learning_rate': 0.00012032320901033972, 'epoch': 11.0}\r\n{'eval_loss': 32.77891159057617, 'eval_wer': 0.030601647898231492, 'eval_runtime': 316.6529, 'eval_samples_per_second': 242.998, 'eval_steps_per_second': 7.595, 'epoch': 11.0}\r\n\t\t\t\t\t\t.\r\n{'loss': 45.0098, 'learning_rate': 0.00010530368353028065, 'epoch': 12.0}\r\n{'eval_loss': 28.909757614135742, 'eval_wer': 0.029566950626531415, 'eval_runtime': 316.2782, 'eval_samples_per_second': 243.286, 'eval_steps_per_second': 7.604, 'epoch': 12.0}\r\n\t\t\t\t\t\t.\r\n{'loss': 42.008, 
'learning_rate': 0.0001001817762186115, 'epoch': 12.34}\r\n{'loss': 40.349, 'learning_rate': 0.00010011267540620383, 'epoch': 12.34}\r\n\r\n> {'loss': 41.2924, 'learning_rate': 9.983682607090103e-05, 'epoch': 12.36}\r\n\r\n{'loss': 41.5023, 'learning_rate': 9.94907680945347e-05, 'epoch': 12.39}\r\n\t\t\t\t\t\t.\r\n{'loss': 39.5593, 'learning_rate': 9.042360598227473e-05, 'epoch': 12.99}\r\n{'loss': 40.342, 'learning_rate': 9.035436669128508e-05, 'epoch': 12.99}\t\t\t\t\t\t\r\n{'loss': 38.5417, 'learning_rate': 9.028512740029543e-05, 'epoch': 13.0}\r\n{'eval_loss': 25.61539649963379, 'eval_wer': 0.03266723374001803, 'eval_runtime': 316.6491, 'eval_samples_per_second': 243.001, 'eval_steps_per_second': 7.595, 'epoch': 13.0}\r\n\t\t\t\t\t\t.\r\n{'loss': 36.2269, 'learning_rate': 9.021588810930574e-05, 'epoch': 13.0}\r\n{'loss': 37.1206, 'learning_rate': 9.014664881831609e-05, 'epoch': 13.01}\r\n{'loss': 36.0957, 'learning_rate': 9.007754800590842e-05, 'epoch': 13.01}\r\n{'loss': 38.5678, 'learning_rate': 9.000830871491875e-05, 'epoch': 13.02}\r\n \t\t\t\t\t.\r\n{'loss': 42.091, 'learning_rate': 8.717143648449039e-05, 'epoch': 13.21}\r\n{'loss': 42.3242, 'learning_rate': 8.696427252584932e-05, 'epoch': 13.22}\r\n{'loss': 130.8234, 'learning_rate': 8.689517171344164e-05, 'epoch': 13.22}\r\n{'loss': 585.9929, 'learning_rate': 8.684545790251107e-05, 'epoch': 13.23}\r\n\r\n> {'loss': 0.0, 'learning_rate': 8.67762186115214e-05, 'epoch': 13.23}\r\n\r\n{'loss': 0.0, 'learning_rate': 8.670697932053174e-05, 'epoch': 13.24}\r\n{'loss': 0.0, 'learning_rate': 8.663774002954209e-05, 'epoch': 13.24}\r\n{'loss': 0.0, 'learning_rate': 8.656850073855243e-05, 'epoch': 13.25}\r\n```\r\n",
"This is odd. It seems like logging changed from once per epoch to many times per epoch - probably based on the number of steps? In my mind this can only be the case if you reran training from an old checkpoint while changing `TrainingArguments`.\r\n\r\nThe `0.0` loss means that your training became unstable. It's hard to tell why because things seemed to be converging nicely until then. If you _did_ run training again and used new data, then check that none of your target transcripts are longer than the output sequence length of the model (~120 characters I think?) and that there aren't any missing values.",
"Thanks @OllieBroadhurst . \r\n> probably based on the number of steps?\r\n- Yes, the `logging_steps=500` in my configuration, it could be the cause of multiple times logging\r\n > In my mind this can only be the case if you reran training from an old checkpoint while changing TrainingArguments.\r\n- True, in fact, I had changed the `group_by_length=False` once and resumed the running from the exisiting checkpoint. \r\n- Let me add more new dataset and start a fresh run to see whether the issue goes aways\r\n\r\n> The 0.0 loss means that your training became unstable. It's hard to tell why because things seemed to be converging nicely until then. If you did run training again and used new data, then check that none of your target transcripts are longer than the output sequence length of the model (~120 characters I think?) and that there aren't any missing values.\r\n- Currently, I filtered the dataset with maximum length is 6 seconds by the `vectorized_datasets.filter`. May I know how can I map the length in seconds to characters?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"The final re-run result looks good as expected. I'd like to close this issue. Thank you all for the help."
] | 1,673
| 1,677
| 1,677
|
NONE
| null |
### System Info
- transformers version: 4.22.0.dev0
- Platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Both have same issue
- $ pip freeze |grep datasets
datasets==2.4.0
### Who can help?
@patrickvonplaten
@anton-l
@sanchit-gandhi
@OllieBroadhurst
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. I am fine-tuning the pre-trained model **facebook/wav2vec2-large-robust-ft-libri-960h** using a customized dataset (a 3k-hour Singapore English dataset).
2. I split the large dataset into multiple small batches, each containing about 100k+ samples.
3. Load the previously tuned output together with the next batch of the dataset to run the training script at [ctc_finetune.py](https://drive.google.com/file/d/1NogO0G8-RtLGaisfcmBrXh6ESKZbWfZK/view?usp=sharing)
   - The first run loads the pre-trained model **facebook/wav2vec2-large-robust-ft-libri-960h** and the first batch of the dataset
4. After that, run evaluation on the tuned model with a fixed eval_dataset
5. Loop steps 3 & 4 until the last batch of the dataset has been used.
- Apart from the customized dataset, my training script [ctc_finetune.py](https://drive.google.com/file/d/1NogO0G8-RtLGaisfcmBrXh6ESKZbWfZK/view?usp=sharing) is almost the same as the original one
- Train and eval log at [train_eval.log](https://drive.google.com/file/d/1nsuWis0r6inuyF80aZDn7FaXActfMAUJ/view?usp=share_link)
- I upload all my scripts and training logs in the link at https://drive.google.com/drive/folders/1M5xE4L_HBxBQynWyl6f1c-tJtat027d1?usp=share_link
- I cannot figure out what went wrong. I am not sure whether it's a library bug or a configuration mistake on my side.
- I'd appreciate it if someone could help take a look; a sketch of the loop described above is shown below.
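For clarity, here is a minimal sketch of the loop in steps 3-5. It is illustrative only: the shard list, paths, and hyper-parameters are placeholders, `dataset_shards` and `eval_dataset` are assumed to be prepared elsewhere, and the data collator / processor wiring from the real `ctc_finetune.py` is omitted:
```
from transformers import AutoModelForCTC, Trainer, TrainingArguments

checkpoint = "facebook/wav2vec2-large-robust-ft-libri-960h"

# dataset_shards: list of ~100k-sample training shards (assumption, prepared elsewhere)
for i, shard in enumerate(dataset_shards):
    # Resume from the previous run's output so the shards accumulate
    model = AutoModelForCTC.from_pretrained(checkpoint)
    args = TrainingArguments(
        output_dir=f"out/batch_{i}",
        num_train_epochs=20,
        learning_rate=3e-4,   # placeholder value
        warmup_steps=500,     # note: warmup restarts on every shard
    )
    trainer = Trainer(model=model, args=args, train_dataset=shard, eval_dataset=eval_dataset)
    trainer.train()
    trainer.save_model(args.output_dir)
    checkpoint = args.output_dir  # the next shard fine-tunes this run's output
```
Note that, as discussed in the comments, each iteration re-applies the learning-rate schedule and warmup from scratch.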
### Expected behavior
**After 20 EPOCH for each train and eval, here is the output as follows:**
dataset batch | epoch | train_samples | train_loss | eval_loss | eval_wer | eval_samples(Fixed for all the eval)
-- | -- | -- | -- | -- | -- | --
1 | 20 | 104398 | 27.24 | 268.3798 | 0.5587 | 76946
2 | 20 | 104211 | 32.74 | 389.0578 | 0.6787 | 76946
3 | 20 | 104223 | 27.24 | 436.5064 | 0.7194 | 76946
4 | 20 | 104174 | 24.86 | 469.7542 | 0.7437 | 76946
5 | 20 | 104018 | 23.01 | 484.5408 | 0.7627 | 76946
6 | 20 | 104158 | 21.79 | 508.0651 | 0.7728 | 76946
7 | 20 | 104166 | 21.21 | 503.9046 | 0.7799 | 76946
8 | 20 | 104280 | 20.44 | 516.5866 | 0.7919 | 76946
9 | 20 | 104111 | 19.6 | 508.3685 | 0.7888 | 76946
10 | 20 | 104073 | 19.31 | 525.6768 | 0.7914 | 76946
11 | 20 | 104144 | 19.04 | 534.6445 | 0.7979 | 76946
12 | 20 | 104298 | 18.8 | 525.5178 | 0.7936 | 76946
13 | 20 | 104230 | 18.62 | 520.3677 | 0.7952 | 76946
14 | 20 | 104053 | 17.95 | 526.2173 | 0.8025 | 76946
- I expected the **eval_loss** and **eval_wer** to decrease gradually.
- The above results were produced by running train and eval separately.
- I also tried running evaluation during training; the result was similar, with the **eval_loss** and **eval_wer** **increasing** unexpectedly, as in the following table:
epoch | train_loss | eval_loss | eval_wer
-- | -- | -- | --
1 | 138.01 | 58.21 | 0.21
2 | 114.08 | 75.16 | 0.27
3 | 98.49 | 86.66 | 0.31
4 | 85.66 | 95.94 | 0.33
5 | 76.16 | 105.48 | 0.34
6 | 66.63 | 107.92 | 0.34
7 | 60.2 | 118.42 | 0.37
8 | 55.34 | 121.54 | 0.37
9 | 49.62 | 128.89 | 0.37
10 | 43.41 | 137.79 | 0.39
11 | 40.14 | 133.79 | 0.38
12 | 36.3 | 143.32 | 0.4
13 | 33.48 | 144.25 | 0.38
14 | 30.32 | 152.04 | 0.4
15 | 27.34 | 158.57 | 0.4
16 | 24.78 | 153.13 | 0.38
17 | 22.8 | 159.58 | 0.39
18 | 21.36 | 165.38 | 0.39
19 | 20.27 | 167.18 | 0.38
20 | 20.34 | 167.7 | 0.38
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21103/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21102
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21102/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21102/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21102/events
|
https://github.com/huggingface/transformers/pull/21102
| 1,531,076,400
|
PR_kwDOCUB6oc5HQ_so
| 21,102
|
Fix `torchscript` tests for `AltCLIP`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,673
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
Fix `torchscript` tests for `AltCLIP`.
This model uses `roberta` as its text model, which has
```python
self.register_buffer(
"token_type_ids", torch.zeros(self.position_ids.size(), dtype=torch.long), persistent=False
)
```
and this requires the change in this PR to pass the test.
See [current failing job run page](https://github.com/huggingface/transformers/actions/runs/3889079663/jobs/6637067165)
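For context, buffers registered with `persistent=False` are excluded from the `state_dict`, which is what makes them easy to lose when reconstructing a model for the torchscript test. A minimal standalone illustration (hypothetical module, not the actual test code):
```
import torch
from torch import nn

class TinyTextModule(nn.Module):
    def __init__(self):
        super().__init__()
        # Same pattern as the roberta text model: a buffer that is not saved to checkpoints
        self.register_buffer(
            "token_type_ids", torch.zeros(1, 4, dtype=torch.long), persistent=False
        )

module = TinyTextModule()
print("token_type_ids" in module.state_dict())           # False: excluded from the state_dict
print("token_type_ids" in dict(module.named_buffers()))  # True: still present on the module
```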
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21102/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21102",
"html_url": "https://github.com/huggingface/transformers/pull/21102",
"diff_url": "https://github.com/huggingface/transformers/pull/21102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21102.patch",
"merged_at": 1673600599000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21101
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21101/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21101/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21101/events
|
https://github.com/huggingface/transformers/pull/21101
| 1,530,727,869
|
PR_kwDOCUB6oc5HPzZe
| 21,101
|
Slightly better WandbCallback
|
{
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21101). All of your documentation changes will be reflected on that endpoint.",
"@stevhliu, I seemed to have messed up the code quality steps. I'll fix that soon.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Allows more environment variables to be used with the `WandbCallback`, and prioritizes values set in `TrainingArguments` over those in the environment.
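For illustration, these are the kinds of environment variables involved; the values below are placeholders, and the exact set handled by this PR is in its diff:
```
import os

# Set before training starts; read by the W&B integration.
os.environ["WANDB_PROJECT"] = "my-project"    # log runs to this project
os.environ["WANDB_LOG_MODEL"] = "true"        # upload the final model as an artifact
os.environ["WANDB_WATCH"] = "gradients"       # log gradient histograms
```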
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21101/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21101",
"html_url": "https://github.com/huggingface/transformers/pull/21101",
"diff_url": "https://github.com/huggingface/transformers/pull/21101.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21101.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21100
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21100/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21100/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21100/events
|
https://github.com/huggingface/transformers/issues/21100
| 1,530,705,602
|
I_kwDOCUB6oc5bPLbC
| 21,100
|
Models for low resource languages
|
{
"login": "RaiAmanRai",
"id": 102528851,
"node_id": "U_kgDOBhx3Uw",
"avatar_url": "https://avatars.githubusercontent.com/u/102528851?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RaiAmanRai",
"html_url": "https://github.com/RaiAmanRai",
"followers_url": "https://api.github.com/users/RaiAmanRai/followers",
"following_url": "https://api.github.com/users/RaiAmanRai/following{/other_user}",
"gists_url": "https://api.github.com/users/RaiAmanRai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RaiAmanRai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RaiAmanRai/subscriptions",
"organizations_url": "https://api.github.com/users/RaiAmanRai/orgs",
"repos_url": "https://api.github.com/users/RaiAmanRai/repos",
"events_url": "https://api.github.com/users/RaiAmanRai/events{/privacy}",
"received_events_url": "https://api.github.com/users/RaiAmanRai/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[] | 1,673
| 1,673
| null |
NONE
| null |
### Model description
Hi, I was wondering if there is any way to leverage SOTA models like [this](https://huggingface.co/facebook/bart-large-mnli) one by Facebook and come up with a model for a low-resource language, say Filipino, using something like student-teacher methods.
My main aim is to come up with **zero-shot models** for such languages with accuracy similar to what these models achieve for English.
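For reference, the English baseline in question is used like this (standard zero-shot classification pipeline usage):
```
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "one day I will see the world",
    candidate_labels=["travel", "cooking", "dancing"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```
The goal would be to obtain comparable behaviour for Filipino inputs, e.g. by distilling this teacher into a multilingual or Filipino student model.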
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21100/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/21099
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21099/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21099/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21099/events
|
https://github.com/huggingface/transformers/pull/21099
| 1,530,544,862
|
PR_kwDOCUB6oc5HPLV8
| 21,099
|
[Time-Series] informer model
|
{
"login": "elisim",
"id": 17675462,
"node_id": "MDQ6VXNlcjE3Njc1NDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/17675462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elisim",
"html_url": "https://github.com/elisim",
"followers_url": "https://api.github.com/users/elisim/followers",
"following_url": "https://api.github.com/users/elisim/following{/other_user}",
"gists_url": "https://api.github.com/users/elisim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elisim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elisim/subscriptions",
"organizations_url": "https://api.github.com/users/elisim/orgs",
"repos_url": "https://api.github.com/users/elisim/repos",
"events_url": "https://api.github.com/users/elisim/events{/privacy}",
"received_events_url": "https://api.github.com/users/elisim/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"very cool! Having a look shortly!",
"> very cool! Having a look shortly!\n\nWow you saw it fast! right now it's just the template of the vanilla TS transformer. \n\nBTW I sent you an email :)",
"Hi, @NielsRogge and @kashif 🙂\r\n\r\nMaybe you have an example for a conversion script?\r\nI'm following the [How to add a model to 🤗 Transformers?](https://huggingface.co/docs/transformers/add_new_model), section six \"Write a conversion script\":\r\n\r\n> Don’t hesitate to ask the Hugging Face team to point you to a similar already existing conversion script for your model.\r\n\r\nThank you so much,\r\nEli",
"thanks! having a look!",
"> thanks! having a look!\r\n\r\nWork is still in progress, but you might have a look if you have time :) \r\nAnd by the way, your implemention and the vanilla TS are helping me a lot!",
"Hi @kashif, I fixed the final attention output of ProbSparseAttention, and added the ProbMask. In more detail:\r\n\r\n# Major\r\n1. Added calculation of the final `attn_output` using `v_aggregated`, meaning steps 7 & 8 in the following:\r\n\r\n<img width=\"446\" alt=\"image\" src=\"https://user-images.githubusercontent.com/17675462/218316971-ce4dbe9e-d677-48e6-9184-0043ca179e6a.png\">\r\n\r\n**Reference:** [Informer paper](https://arxiv.org/abs/2012.07436), **Section:** \"Implement of the ProbSparse self-attention\"\r\n\r\n2. Added ProbMask, function name `_prepare_decoder_prob_attention_mask`.\r\n\r\n\r\n# Minor \r\n1. Comment-in attention dropout in `ProbSparseAttention` since the original impl didn't use it.\r\n2. Removed `attention_mask` for the encoder, since the original impl don't apply it to the encoder, only for the decoder.\r\n3. Added `self.attn = config.attn` in the decoder, to later check if to create ProbMask or the standard casual mask.\r\n4. Removed unused code from the original impl.\r\n\r\nNow the tests are falling mostly because assertion errors. Before continuing to fix them, I would appreciate if you can have a look :)\r\n\r\nThanks,\r\nEli",
"_The documentation is not available anymore as the PR was closed or merged._",
"thanks @sgugger will get it fixed!",
"@kashif fixed what I could from Sylvain comments.\r\nThe main thing is that some tests are breaking after this fix https://github.com/huggingface/transformers/pull/21099/commits/b4cbddfa05e3bd739b79569cd3c3b89e316f2451\r\n",
"@sgugger I have fixed all the assert issues in the PR #21846 and will fix the copies here when that gets merged"
] | 1,673
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
Adding Time Series Informer model https://arxiv.org/abs/2012.07436
Related issue: #20903
@kashif :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21099/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21099",
"html_url": "https://github.com/huggingface/transformers/pull/21099",
"diff_url": "https://github.com/huggingface/transformers/pull/21099.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21099.patch",
"merged_at": 1678221399000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21098
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21098/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21098/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21098/events
|
https://github.com/huggingface/transformers/pull/21098
| 1,530,411,684
|
PR_kwDOCUB6oc5HOuZ3
| 21,098
|
TokenGT for graph classification
|
{
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@Raman-Kumar Here is a first draft to get you started!\r\n\r\nI suggest you start by finding a checkpoint, then try to compare the execution step by step with the original model to make sure results are the same (I can provide you with the script I used for Graphormer if you need). I also added some todos in the code, which can help you get started too! Feel free to compare everything with the Graphormer PR to get an idea of the process and things to do!",
"_The documentation is not available anymore as the PR was closed or merged._",
"👏 Okay doing that ...\r\n@clefourrier may need a little relaxed timeline, studying a few more things",
"@Raman-Kumar Take the time you need, there is no urgency on my side; feel free to ping me if you need help later on!",
"Sorry for accidentally closing it",
"Closing and replacing with #21745 21745"
] | 1,673
| 1,677
| 1,677
|
MEMBER
| null |
# What does this PR do?
Adds the TokenGT model for graph classification in Transformers.
Done:
- [x] Architecture ported
- [x] Collator (the model has no tokenizer) and preprocessing
Todo:
- [ ] Test results against original implementation, to make sure they are within precision range. Edit: exactly the same results :fire:
- [ ] Add checkpoints and make sure they load properly
- [ ] Update doc
- [ ] Update test suite
- [ ] Add model card for the checkpoints once added
## Dependencies
Cython - this could be ported to Python, but preprocessing will be considerably slower, as well as collation if preprocessing is done on the fly.
Linked to #21079
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. (Discussed on Slack)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Not tagging anyone for now as this is a draft.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21098/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21098/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21098",
"html_url": "https://github.com/huggingface/transformers/pull/21098",
"diff_url": "https://github.com/huggingface/transformers/pull/21098.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21098.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21097
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21097/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21097/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21097/events
|
https://github.com/huggingface/transformers/issues/21097
| 1,530,080,941
|
I_kwDOCUB6oc5bMy6t
| 21,097
|
Errors when using transformers dev install
|
{
"login": "samuelzxu",
"id": 14795989,
"node_id": "MDQ6VXNlcjE0Nzk1OTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/14795989?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samuelzxu",
"html_url": "https://github.com/samuelzxu",
"followers_url": "https://api.github.com/users/samuelzxu/followers",
"following_url": "https://api.github.com/users/samuelzxu/following{/other_user}",
"gists_url": "https://api.github.com/users/samuelzxu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samuelzxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samuelzxu/subscriptions",
"organizations_url": "https://api.github.com/users/samuelzxu/orgs",
"repos_url": "https://api.github.com/users/samuelzxu/repos",
"events_url": "https://api.github.com/users/samuelzxu/events{/privacy}",
"received_events_url": "https://api.github.com/users/samuelzxu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is not an error message, but a warning. It comes from TensorFlow warning you you don't have any GPU installed. This is nothing linked to Transformers.",
"Thanks! I see now that it was just taking a very long time to run, so I assumed the message was an error.",
"This is actually quite an odd error, though - I don't get it when I run `fix_copies`, and I can't think of why TF would even be imported in that script. Marking this as 'mysterious', and I'll come back to it if I ever figure out why it was happening to you!",
"These warnings also happen to me when I dev install transformers on Google Colab.\r\n\r\nMoreover, it seems that the first generation with `model.generate` is slower when this happens. There is a freeze of a few seconds in this first inference."
] | 1,673
| 1,676
| 1,673
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.15.0-57-generic-x86_64-with-glibc2.35
- Python version: 3.9.0
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.0 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger @Rocketknight1
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. `python -m pip uninstall transformers`
2. `python -m pip install -e ".[dev]"`
3. `make fix-copies`
Here's the error message when I run the copy checking function:
```
$ python utils/check_copies.py --fix_and_overwrite
2023-01-11 23:05:36.590052: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-01-11 23:05:36.726808: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2023-01-11 23:05:36.726831: I tensorflow/compiler/xla/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2023-01-11 23:05:37.362737: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-01-11 23:05:37.362802: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-01-11 23:05:37.362811: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
```
I played around with it and a similar error occurs whenever I try to import transformers:
```
>>> import transformers
2023-01-11 23:21:57.034819: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-01-11 23:21:57.183106: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2023-01-11 23:21:57.183131: I tensorflow/compiler/xla/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2023-01-11 23:21:57.863217: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-01-11 23:21:57.863294: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-01-11 23:21:57.863305: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
```
And after some more poking, it looks like whenever I use TensorFlow (but not PyTorch) this error pops up:
```
>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
2023-01-11 23:26:25.903590: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:267] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2023-01-11 23:26:25.903681: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (ziggy-ThinkPad-T480): /proc/driver/nvidia/version does not exist
2023-01-11 23:26:25.905701: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
```
I have no idea why it keeps trying to use CUDA-related functions when I don't have a GPU. Does the dev install assume that I have a GPU?
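As an aside, these log lines can be silenced with TensorFlow's standard logging environment variable (nothing transformers-specific), as long as it is set before TensorFlow gets imported:
```
import os

# 0 = all messages, 1 = hide INFO, 2 = hide INFO + WARNING, 3 = errors only
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

import transformers  # the cudart / TensorRT warnings above should no longer appear
```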
### Expected behavior
I expect the script to output something of this sort:
```
python utils/check_copies.py --fix_and_overwrite
Detected changes, rewriting src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py.
```
The above output was from a colab notebook (with gpu) that I installed the dev environment onto.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21097/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21096
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21096/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21096/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21096/events
|
https://github.com/huggingface/transformers/pull/21096
| 1,530,005,919
|
PR_kwDOCUB6oc5HNWwY
| 21,096
|
WIP: Added basic eos token based pooling
|
{
"login": "isamu-isozaki",
"id": 23430101,
"node_id": "MDQ6VXNlcjIzNDMwMTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/23430101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isamu-isozaki",
"html_url": "https://github.com/isamu-isozaki",
"followers_url": "https://api.github.com/users/isamu-isozaki/followers",
"following_url": "https://api.github.com/users/isamu-isozaki/following{/other_user}",
"gists_url": "https://api.github.com/users/isamu-isozaki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isamu-isozaki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isamu-isozaki/subscriptions",
"organizations_url": "https://api.github.com/users/isamu-isozaki/orgs",
"repos_url": "https://api.github.com/users/isamu-isozaki/repos",
"events_url": "https://api.github.com/users/isamu-isozaki/events{/privacy}",
"received_events_url": "https://api.github.com/users/isamu-isozaki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Current informal tests to confirm that behavior isn't changed: \r\n\r\nIn original\r\n\r\n```\r\nfrom transformers import CLIPTokenizer, CLIPTextModel\r\ntokenizer=CLIPTokenizer.from_pretrained(\"CompVis/stable-diffusion-v1-4\", subfolder=\"tokenizer\")\r\ntext_encoder=CLIPTextModel.from_pretrained(\"CompVis/stable-diffusion-v1-4\", subfolder=\"text_encoder\").to(\"cuda\")\r\nencoded = text_encoder(**tokenizer([\"hi my name is bob\"], padding=True, return_tensors=\"pt\").to(\"cuda\"))\r\nencoded.pooler_output\r\n```\r\ngives\r\n```\r\ntensor([[ 4.5786e-01, 6.4382e-02, 6.3140e-01, 6.2113e-01, 9.6310e-01,\r\n -9.7793e-02, -7.2665e-01, -7.6867e-01, 6.6914e-02, 6.3608e-01,\r\n 1.4786e+00, -1.9321e-01, -1.1228e+00, 1.8028e+00, 8.4215e-01,\r\n -5.4223e-01, -6.0589e-01, 1.1507e+00, 1.3731e-01, 8.6263e-01,\r\n ...\r\n```",
"_The documentation is not available anymore as the PR was closed or merged._",
"Ok! Passed the first test and got the same result as above. The main problem I have with the current solution is we need to get the eos_token_id from the tokenizer like\r\n```\r\nencoded = text_encoder(**tokenizer([\"hi my name is bob\"], padding=True, return_tensors=\"pt\", eos_token_id=tokenizer.eos_token_id).to(\"cuda\"))\r\n```\r\nThere might be ways to save eos_token_id in the transformer part but for now, I think this does fix the problem. Will make more tests in the future and improve quality.",
"Hey, quick tip, you can also initialize a class variable `eos_token_id` for example, which can be fetch in the `config` at `__init__` time. \r\nNow regarding the issue, I have to ask why not use the same technique as in `CLIP`? You mention pros and cons, would you mind giving more details? \r\n",
"@ArthurZucker Thanks for the tip! Will fix that asap.\r\n\r\nAnd for the original implementation of CLIP, sorry let me clarify. I meant the original implementation of how CompVis/latent-diffusion trains on added tokens. In there, they force the placeholder token/added token to be a single token that is already within the embeddings during the textual inversion training. \r\nPro: With this approach, it'll work for the current implementation of clip \r\nCon: It's a pretty strict requirement that the current textual inv script in diffusers doesn't have. \r\n\r\nBut overall, I think this pr, once I fix things up, won't cause any problems to the existing implementations because we want to pool on the eos token anyway and it'll also end up working for the textual inversion scripts in diffusers which will be nice.",
"@ArthurZucker Hi! In the config, I noticed that the eos_token_id for the clip text model can be different from the tokenizer as follows\r\n```\r\nCLIPTextConfig {\r\n \"_name_or_path\": \"CompVis/stable-diffusion-v1-4\",\r\n \"architectures\": [\r\n \"CLIPTextModel\"\r\n ],\r\n \"attention_dropout\": 0.0,\r\n \"bos_token_id\": 0,\r\n \"dropout\": 0.0,\r\n \"eos_token_id\": 2,\r\n \"hidden_act\": \"quick_gelu\",\r\n \"hidden_size\": 768,\r\n \"initializer_factor\": 1.0,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-05,\r\n \"max_position_embeddings\": 77,\r\n \"model_type\": \"clip_text_model\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"pad_token_id\": 1,\r\n \"projection_dim\": 512,\r\n \"torch_dtype\": \"float32\",\r\n \"transformers_version\": \"4.26.0.dev0\",\r\n \"vocab_size\": 49408\r\n}\r\n```\r\nis there some reason for this? I'll try testing out vocab_size-1(altho I don't think it's a good idea) for now",
"Will fix the checks.",
"Pulled from upstream so that some tests can pass. And changed a code bit and added documentation. I do think a better solution might be to fix the eos_token_id in the config. I'll try figuring out how to do that",
"Actually, this made the code diff a bit messy so closing this now and will make a new pr.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21096). All of your documentation changes will be reflected on that endpoint.",
"Also the reason for the possible difference between the `model.config.xxx_token_id` and `tokenizer.config.xxx_token_id` is because they are not linked together. We usually make sure that they have the same value but nothing is forcing that. Biggest reason I see is dependency, and simplicity since you could use other tokenizer with the same model and vice versa. 😉 ",
"Thanks for the comment! Will post q on the new pr just for other people who want to follow."
] | 1,673
| 1,674
| 1,674
|
NONE
| null |
# What does this PR do?
This PR is still a WIP. This is based on [this issue](https://github.com/huggingface/transformers/issues/21029). The main problem is that when new tokens are added to the tokenizer and text model and then learned, such as with [textual inversion](https://textual-inversion.github.io/), the CLIP text model pools at the wrong location: pooling ends up at the newly added token's position instead of at the eos token's position.
Fixes #21029
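The idea can be sketched as pooling at the first EOS position instead of relying on `input_ids.argmax(-1)`, which breaks once added tokens have ids larger than the eos id. This is a minimal sketch with illustrative names, not the PR's actual diff:
```
import torch

def pool_at_eos(last_hidden_state, input_ids, eos_token_id):
    # argmax over the boolean mask returns the FIRST position equal to eos_token_id,
    # so pooling stays correct even after new tokens are added to the vocabulary
    eos_positions = (input_ids == eos_token_id).int().argmax(dim=-1)
    batch_indices = torch.arange(last_hidden_state.shape[0], device=last_hidden_state.device)
    return last_hidden_state[batch_indices, eos_positions]
```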
## Before submitting
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Models:
- text models: @ArthurZucker and @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21096/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21096",
"html_url": "https://github.com/huggingface/transformers/pull/21096",
"diff_url": "https://github.com/huggingface/transformers/pull/21096.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21096.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21095
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21095/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21095/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21095/events
|
https://github.com/huggingface/transformers/pull/21095
| 1,530,005,566
|
PR_kwDOCUB6oc5HNWrq
| 21,095
|
Clarify and add missing typical_p argument docstring.
|
{
"login": "shermansiu",
"id": 12627125,
"node_id": "MDQ6VXNlcjEyNjI3MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/12627125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shermansiu",
"html_url": "https://github.com/shermansiu",
"followers_url": "https://api.github.com/users/shermansiu/followers",
"following_url": "https://api.github.com/users/shermansiu/following{/other_user}",
"gists_url": "https://api.github.com/users/shermansiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shermansiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shermansiu/subscriptions",
"organizations_url": "https://api.github.com/users/shermansiu/orgs",
"repos_url": "https://api.github.com/users/shermansiu/repos",
"events_url": "https://api.github.com/users/shermansiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shermansiu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @gante as well "
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
This adds the argument docstring for locally typical sampling (implemented in #15504) that was missing from `src/transformers/configuration_utils.py`, and clarifies the existing docstring in `src/transformers/generation/configuration_utils.py`.
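For reference, the parameter being documented is used like this (a minimal sketch; `gpt2` is just a convenient small checkpoint):
```
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The weather today is", return_tensors="pt")
# typical_p in (0, 1] keeps the smallest set of tokens whose local typicality mass
# reaches typical_p; it only has an effect when sampling is enabled
outputs = model.generate(**inputs, do_sample=True, typical_p=0.9, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```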
## Who can review?
@stevhliu @patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21095/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21095",
"html_url": "https://github.com/huggingface/transformers/pull/21095",
"diff_url": "https://github.com/huggingface/transformers/pull/21095.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21095.patch",
"merged_at": 1673965428000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21094
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21094/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21094/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21094/events
|
https://github.com/huggingface/transformers/issues/21094
| 1,529,987,680
|
I_kwDOCUB6oc5bMcJg
| 21,094
|
device_map='auto' causes memory to not be freed with torch.cuda.empty_cache()
|
{
"login": "oobabooga",
"id": 112222186,
"node_id": "U_kgDOBrBf6g",
"avatar_url": "https://avatars.githubusercontent.com/u/112222186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oobabooga",
"html_url": "https://github.com/oobabooga",
"followers_url": "https://api.github.com/users/oobabooga/followers",
"following_url": "https://api.github.com/users/oobabooga/following{/other_user}",
"gists_url": "https://api.github.com/users/oobabooga/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oobabooga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oobabooga/subscriptions",
"organizations_url": "https://api.github.com/users/oobabooga/orgs",
"repos_url": "https://api.github.com/users/oobabooga/repos",
"events_url": "https://api.github.com/users/oobabooga/events{/privacy}",
"received_events_url": "https://api.github.com/users/oobabooga/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] |
[
"That is an interesting bug, but should probably be adressed in `accelerate` as using `device_map = \"auto\"` is reliant on the accelerate library. ",
"I think it's the 8bit part that may be causing the issue, actually ;-) ",
"I was able to reproduce the issue with `accelerate` loaded models (without 8-bit):\r\n```\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM\r\n\r\nmodel_8bit = AutoModelForCausalLM.from_pretrained(\"facebook/opt-350m\", device_map=\"auto\", load_in_8bit=True)\r\nmodel_accelerate = AutoModelForCausalLM.from_pretrained(\"facebook/opt-350m\", device_map=\"auto\", torch_dtype=torch.float16)\r\nmodel_torch = AutoModelForCausalLM.from_pretrained(\r\n \"facebook/opt-350m\", low_cpu_mem_usage=True, torch_dtype=torch.float16\r\n).cuda()\r\n\r\ndel model_accelerate\r\ntorch.cuda.empty_cache()\r\n\r\ndel model_8bit\r\ntorch.cuda.empty_cache()\r\n\r\ndel model_torch\r\ntorch.cuda.empty_cache()\r\n```\r\nWith this script the GPU VRAM is freed only after the lines:\r\n```\r\ndel model_torch\r\ntorch.cuda.empty_cache()\r\n```\r\nI also profiled the GPU memory and observed that the allocated memory decreases after the aforementioned line. \r\n<img width=\"1134\" alt=\"Screenshot 2023-01-16 at 11 06 14\" src=\"https://user-images.githubusercontent.com/49240599/212652122-796186e4-c476-42b6-b628-7def9d7ea3d0.png\">\r\nNote that the 8bit Linear modules seems to behave correctly with respect to `torch.cuda.empty_cache`, i.e:\r\n```\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM\r\nimport bitsandbytes as bnb\r\n\r\nmodel_8bit = AutoModelForCausalLM.from_pretrained(\"facebook/opt-350m\", device_map=\"auto\", load_in_8bit=True)\r\nmodel_accelerate = AutoModelForCausalLM.from_pretrained(\"facebook/opt-350m\", device_map=\"auto\", torch_dtype=torch.float16)\r\nmodel_torch = AutoModelForCausalLM.from_pretrained(\r\n \"facebook/opt-350m\", low_cpu_mem_usage=True, torch_dtype=torch.float16\r\n).cuda()\r\n\r\nlinear_8bit = bnb.nn.Linear8bitLt(10000, 10000).to(\"cuda\")\r\n\r\ndel model_accelerate\r\ntorch.cuda.empty_cache()\r\n\r\ndel model_torch\r\ntorch.cuda.empty_cache()\r\n\r\ndel model_8bit\r\ntorch.cuda.empty_cache()\r\n\r\ndel linear_8bit\r\ntorch.cuda.empty_cache()\r\n```\r\nThe VRAM goes correctly down after the line:\r\n```\r\ndel linear_8bit\r\ntorch.cuda.empty_cache()\r\n```\r\nSo the issue should be on `accelerate` side but not sure where exactly, will investigate more but if you have more insights @sgugger @muellerzr would love to hear from you!",
"I have tried and can indeed reproduce without the 8bit loading. I don't know why the cache appears nonempty, but iterating on a loop (re-creating the model and then deleting it several times) does not result in a memory increase, so the memory is reused when we need it.\r\n\r\nIf someone manages to find more about this, I'd be very interested to learn why this is the case.",
"```\r\ngc.collect()\r\ntorch.cuda.empty_cache()\r\ngc.collect()\r\n```\r\nmay free memory.",
"I can confirm that ~690MB of seems to be freed (monitoring with `nvidia-smi`) - which corresponds to the size of the weights of opt-350 in fp16 thanks to the trick proposed by @git-longcat \r\n```\r\nimport torch\r\nimport gc\r\nfrom transformers import AutoModelForCausalLM\r\n\r\nmodel_accelerate = AutoModelForCausalLM.from_pretrained(\"facebook/opt-350m\", device_map={\"\":0}, torch_dtype=torch.float16)\r\nmodel_torch = AutoModelForCausalLM.from_pretrained(\r\n \"facebook/opt-350m\", low_cpu_mem_usage=True, torch_dtype=torch.float16\r\n).cuda()\r\n\r\ndel model_accelerate\r\ngc.collect()\r\ntorch.cuda.empty_cache()\r\n\r\ndel model_torch\r\ntorch.cuda.empty_cache()\r\n```\r\n@oobabooga can you try on your side and let us know if this trick works? \r\nIt seems crucial to call `gc.collect()` before `torch.cuda.empty_cache()` and not after ",
"@git-longcat @younesbelkada I confirm that this works. Thank you. \r\n\r\nIn fact, all I needed was\r\n\r\n```\r\nmodel = None\r\ngc.collect()\r\ntorch.cuda.empty_cache()\r\n\r\n```\r\n\r\nThis doesn't work:\r\n\r\n```\r\nmodel = None\r\ntorch.cuda.empty_cache()\r\ngc.collect()\r\n```\r\n\r\nThis resolves the issue for me. I don't know if it can still be considered a bug or not.",
"@younesbelkada @oobabooga in Accelerate dev now (install via `pip install git+https://github.com/huggingface/accelerate`) I've introduced a `release_memory` util that will perform the above for `n` objects easily:\r\n```python\r\n >>> import torch\r\n >>> from accelerate.utils import release_memory\r\n\r\n >>> a = torch.ones(1000, 1000).cuda()\r\n >>> b = torch.ones(1000, 1000).cuda()\r\n >>> a, b, model = release_memory(a, b, model)\r\n```\r\nJust ensure that the objects are in the same order they were passed in as so the memory will be fully overriden. (see https://github.com/huggingface/accelerate/pull/990 for more information)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### System Info
If I load a model like this
```
model = AutoModelForCausalLM.from_pretrained("models/opt-13b", device_map='auto', load_in_8bit=True)
```
and then do
```
model = None
torch.cuda.empty_cache()
```
the VRAM is not freed. The only way I have found to release it is to kill the program and start it over.
Freeing the memory like this works if the model is loaded the normal way with
```
model = AutoModelForCausalLM.from_pretrained("models/opt-13b", low_cpu_mem_usage=True, torch_dtype=torch.float16).cuda()
```
@ArthurZucker @younesbelkada
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code snippet is provided in the description above.
### Expected behavior
VRAM should be freed with `torch.cuda.empty_cache()`.
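For reference, a minimal sketch of the workaround that surfaced in the comments (the key point, observed in the thread rather than documented, is running `gc.collect()` before `torch.cuda.empty_cache()`):
```
import gc
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("models/opt-13b", device_map='auto', load_in_8bit=True)

# Drop the last reference, collect garbage first, then empty the CUDA cache.
# Reversing the last two calls did not release the VRAM in this thread.
model = None
gc.collect()
torch.cuda.empty_cache()
```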
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21094/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21093
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21093/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21093/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21093/events
|
https://github.com/huggingface/transformers/issues/21093
| 1,529,908,580
|
I_kwDOCUB6oc5bMI1k
| 21,093
|
NaN when training "t5-small" with parallelize() on multiple GPUs
|
{
"login": "MrZhangKY",
"id": 44896606,
"node_id": "MDQ6VXNlcjQ0ODk2NjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/44896606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MrZhangKY",
"html_url": "https://github.com/MrZhangKY",
"followers_url": "https://api.github.com/users/MrZhangKY/followers",
"following_url": "https://api.github.com/users/MrZhangKY/following{/other_user}",
"gists_url": "https://api.github.com/users/MrZhangKY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MrZhangKY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MrZhangKY/subscriptions",
"organizations_url": "https://api.github.com/users/MrZhangKY/orgs",
"repos_url": "https://api.github.com/users/MrZhangKY/repos",
"events_url": "https://api.github.com/users/MrZhangKY/events{/privacy}",
"received_events_url": "https://api.github.com/users/MrZhangKY/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"`parallelize()` is a deprecated function and should not be used. You should use the `accelerate` library see [here](https://github.com/huggingface/accelerate) ",
"@ArthurZucker \r\nThank you for your help. However, I use parallelize() because it can distribute layers of model to different GPUs, which is shown as followers:\r\n\r\n\r\nThere are some models in T5ForConditionalGeneration so big (such as t5-11b is 45G), so they are cant be put on a single GPU.\r\n\r\nBut when I use accelerate to distribute the model, I find it seems uses the data parallelism, and still put all layers of a model in a single GPU. Such as followers:\r\n\r\n\r\nCould you please tell me how can I write code which can train the model by model parallelism? Thank you!",
"Hi @MrZhangKY \r\nIn this case you can use `device_map='balanced'`, the script below worked for me (no NaN loss) on 2xNVIDIA T4 GPU:\r\n```python\r\n## Data\r\nsource = ['[ equal statistic job customer ostrich orange badger blue bull daisy giraffe hamster ivy rabbit possum whale cashew rectangle oval square pecan ] [ you orchid orange frog grey ivy racoon potato whale flax cylinder fennel pumpkin leopard ] [ desire otter mole albatross bat buffalo cat daisy grey hedgehog holly racoon squirrel potato whale apricot cylinder heart fennel raisin lilac ] [ action watch speak otter orange alligator bat bull clover daisy fish green horse racoon potato whale apricot rectangle circle fennel pecan lavender ] [ emotion ostrich orange alligator badger black cobra daisy fish green hazel horse squirrel potato wolf rectangle oval fennel pecan leopard ]', '[ problem ostrich mule badger black bull fox green hazel ivy rabbit potato whale sunflower sphere circle triangle pecan lemur ] [ company frequency time visual laws owl mule alligator badger black clover deer donkey emu grey horse racoon possum whale apricot rectangle oval fennel pecan lavender ] [ lion otter orange baboon bull cat daisy flamingo green iris racoon possum vulture apricot rectangle fennel raisin lemur ] [ equal orchid orange blossom camel chameleon deer fish green jackal possum whale rectangle oval triangle pecan ] [ equal conversation speak orchid orange bat camel deer fish grey iris racoon possum whale flax rectangle triangle pumpkin lemur ]']\r\ntarget = ['Later on in a career, 300 people are clients. You have their attention. The only attention needed is to make up for attrition or to continue growth at the desired rate. With that being said, be on the lookout for me in some crazy Hawaiian shirts. Maybe a fun tie when I have to dress up.', 'To his detriment, I don’t remember anything else. I thought of the rule again a few days ago because of a Hawaiian style, Sandlot movie shirt. I was the person wearing it, and yes, I did receive all kinds of attention. Or maybe, it was the shirt. 
Either way people were talking to me.']\r\n\r\nmodelName = \"t5-small\"\r\n\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(modelName, model_max_length=512)\r\nsource_tokens = [tokenizer(i) for i in source]\r\ntarget_tokens = [tokenizer(i) for i in target]\r\n\r\n\r\n# Model & Optimizer\r\nfrom transformers import T5ForConditionalGeneration\r\nmodel = T5ForConditionalGeneration.from_pretrained(modelName, device_map=\"balanced\")\r\nprint(set(model.hf_device_map.values()))\r\n# model.parallelize() #Model Parallelism\r\n# model.to('cuda:0')\r\n\r\nimport torch\r\noptimizer = torch.optim.SGD(model.parameters(), lr=0.01)\r\n\r\n\r\n## Train\r\nif __name__ == '__main__':\r\n for epoch in range(10):\r\n for i in range(2):\r\n loss = model(input_ids=torch.tensor(source_tokens[i]['input_ids']).unsqueeze(0),\r\n attention_mask=torch.tensor(source_tokens[i]['attention_mask']).unsqueeze(0),\r\n labels=torch.tensor(target_tokens[i]['input_ids']).unsqueeze(0).to(0)).loss\r\n loss.backward()\r\n optimizer.step()\r\n optimizer.zero_grad()\r\n print(loss)\r\n```\r\nMake sure to have your model dispatched by printing `set(model.hf_device_map.values())`, or you can manually inspect `set(model.hf_device_map)`\r\nIf you want to set a custom device map you can pass a dictionary such as:\r\n```\r\ncustom_device_map = {\r\n \"shared\": 0,\r\n \"encoder\": 0,\r\n \"decoder\": 1,\r\n \"decoder.embed_tokens\":0,\r\n \"lm_head\": 0,\r\n}\r\n```\r\nand pass it at initialization: `model = T5ForConditionalGeneration.from_pretrained(modelName, device_map=custom_device_map)`. Although note that you need to manually set `\"decoder.embed_tokens\":0,` since the `embed_tokens` are shared between the encoder and decoder, so you need to make sure they are on the same device (maybe this can be addressed in the future but I think this is intended - otherwise you would need 2 copies of the embedding layer even though they are the same).",
"@younesbelkada \r\nThank you for your help very much!\r\nHowever, I run the code by using 4*A6000, there are some error:\r\n```cmd\r\n{0, 1, 2}\r\ntensor(10.3775, grad_fn=<ToCopyBackward0>)\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: 
[112,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nCell In[1], line 31\r\n 29 for epoch in range(10):\r\n 30 for i in range(2):\r\n---> 31 loss = model(input_ids=torch.tensor(source_tokens[i]['input_ids']).unsqueeze(0),\r\n 32 attention_mask=torch.tensor(source_tokens[i]['attention_mask']).unsqueeze(0),\r\n 33 labels=torch.tensor(target_tokens[i]['input_ids']).unsqueeze(0).to(0)).loss\r\n 34 
loss.backward()\r\n 35 optimizer.step()\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)\r\n 1190 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1191 # this function, and just call forward.\r\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1194 return forward_call(*input, **kwargs)\r\n 1195 # Do not call functions when jit is used\r\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:156, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 154 output = old_forward(*args, **kwargs)\r\n 155 else:\r\n--> 156 output = old_forward(*args, **kwargs)\r\n 157 return module._hf_hook.post_forward(module, output)\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:1648, in T5ForConditionalGeneration.forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1645 decoder_attention_mask = decoder_attention_mask.to(self.decoder.first_device)\r\n 1647 # Decode\r\n-> 1648 decoder_outputs = self.decoder(\r\n 1649 input_ids=decoder_input_ids,\r\n 1650 attention_mask=decoder_attention_mask,\r\n 1651 inputs_embeds=decoder_inputs_embeds,\r\n 1652 past_key_values=past_key_values,\r\n 1653 encoder_hidden_states=hidden_states,\r\n 1654 encoder_attention_mask=attention_mask,\r\n 1655 head_mask=decoder_head_mask,\r\n 1656 cross_attn_head_mask=cross_attn_head_mask,\r\n 1657 use_cache=use_cache,\r\n 1658 output_attentions=output_attentions,\r\n 1659 output_hidden_states=output_hidden_states,\r\n 1660 return_dict=return_dict,\r\n 1661 )\r\n 1663 sequence_output = decoder_outputs[0]\r\n 1665 # Set device for model parallelism\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)\r\n 1190 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1191 # this function, and just call forward.\r\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1194 return forward_call(*input, **kwargs)\r\n 1195 # Do not call functions when jit is used\r\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:988, in T5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 985 position_bias = None\r\n 986 encoder_decoder_position_bias = None\r\n--> 988 hidden_states = self.dropout(inputs_embeds)\r\n 990 for i, (layer_module, past_key_value) in enumerate(zip(self.block, past_key_values)):\r\n 991 layer_head_mask = head_mask[i]\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)\r\n 1190 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1191 # this function, and just 
call forward.\r\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1194 return forward_call(*input, **kwargs)\r\n 1195 # Do not call functions when jit is used\r\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:151, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 149 @functools.wraps(old_forward)\r\n 150 def new_forward(*args, **kwargs):\r\n--> 151 args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)\r\n 152 if module._hf_hook.no_grad:\r\n 153 with torch.no_grad():\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:266, in AlignDevicesHook.pre_forward(self, module, *args, **kwargs)\r\n 261 for name, _ in named_module_tensors(\r\n 262 module, include_buffers=self.offload_buffers, recurse=self.place_submodules\r\n 263 ):\r\n 264 set_module_tensor_to_device(module, name, self.execution_device, value=self.weights_map[name])\r\n--> 266 return send_to_device(args, self.execution_device), send_to_device(kwargs, self.execution_device)\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py:131, in send_to_device(tensor, device, non_blocking)\r\n 128 def _has_to_method(t):\r\n 129 return hasattr(t, \"to\")\r\n--> 131 return recursively_apply(_send_to_device, tensor, device, non_blocking, test_type=_has_to_method)\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py:80, in recursively_apply(func, data, test_type, error_on_other_type, *args, **kwargs)\r\n 58 \"\"\"\r\n 59 Recursively apply a function on a data structure that is a nested list/tuple/dictionary of a given base type.\r\n 60 \r\n (...)\r\n 77 The same data structure as `data` with `func` applied to every object of type `main_type`.\r\n 78 \"\"\"\r\n 79 if isinstance(data, (tuple, list)):\r\n---> 80 return honor_type(\r\n 81 data,\r\n 82 (\r\n 83 recursively_apply(\r\n 84 func, o, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs\r\n 85 )\r\n 86 for o in data\r\n 87 ),\r\n 88 )\r\n 89 elif isinstance(data, Mapping):\r\n 90 return type(data)(\r\n 91 {\r\n 92 k: recursively_apply(\r\n (...)\r\n 96 }\r\n 97 )\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py:51, in honor_type(obj, generator)\r\n 47 \"\"\"\r\n 48 Cast a generator to the same type as obj (list, tuple or namedtuple)\r\n 49 \"\"\"\r\n 50 try:\r\n---> 51 return type(obj)(generator)\r\n 52 except TypeError:\r\n 53 # Some objects may not be able to instantiate from a generator directly\r\n 54 return type(obj)(*list(generator))\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py:83, in <genexpr>(.0)\r\n 58 \"\"\"\r\n 59 Recursively apply a function on a data structure that is a nested list/tuple/dictionary of a given base type.\r\n 60 \r\n (...)\r\n 77 The same data structure as `data` with `func` applied to every object of type `main_type`.\r\n 78 \"\"\"\r\n 79 if isinstance(data, (tuple, list)):\r\n 80 return honor_type(\r\n 81 data,\r\n 82 (\r\n---> 83 recursively_apply(\r\n 84 func, o, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs\r\n 85 )\r\n 86 for o in data\r\n 87 ),\r\n 88 )\r\n 89 elif isinstance(data, Mapping):\r\n 90 return type(data)(\r\n 91 {\r\n 92 k: recursively_apply(\r\n (...)\r\n 96 }\r\n 97 )\r\n\r\nFile 
/opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py:99, in recursively_apply(func, data, test_type, error_on_other_type, *args, **kwargs)\r\n 90 return type(data)(\r\n 91 {\r\n 92 k: recursively_apply(\r\n (...)\r\n 96 }\r\n 97 )\r\n 98 elif test_type(data):\r\n---> 99 return func(data, *args, **kwargs)\r\n 100 elif error_on_other_type:\r\n 101 raise TypeError(\r\n 102 f\"Can't apply {func.__name__} on object of type {type(data)}, only of nested list/tuple/dicts of objects \"\r\n 103 f\"that satisfy {test_type.__name__}.\"\r\n 104 )\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py:124, in send_to_device.<locals>._send_to_device(t, device, non_blocking)\r\n 122 def _send_to_device(t, device, non_blocking):\r\n 123 try:\r\n--> 124 return t.to(device, non_blocking=non_blocking)\r\n 125 except TypeError: # .to() doesn't accept non_blocking as kwarg\r\n 126 return t.to(device)\r\n\r\nRuntimeError: CUDA error: device-side assert triggered\r\n```\r\nThe code I run:\r\n```python\r\nimport os\r\nos.environ['CUDA_LAUNCH_BLOCKING'] = '1'\r\n\r\n## Data\r\nsource = ['[ equal statistic job customer ostrich orange badger blue bull daisy giraffe hamster ivy rabbit possum whale cashew rectangle oval square pecan ] [ you orchid orange frog grey ivy racoon potato whale flax cylinder fennel pumpkin leopard ] [ desire otter mole albatross bat buffalo cat daisy grey hedgehog holly racoon squirrel potato whale apricot cylinder heart fennel raisin lilac ] [ action watch speak otter orange alligator bat bull clover daisy fish green horse racoon potato whale apricot rectangle circle fennel pecan lavender ] [ emotion ostrich orange alligator badger black cobra daisy fish green hazel horse squirrel potato wolf rectangle oval fennel pecan leopard ]', '[ problem ostrich mule badger black bull fox green hazel ivy rabbit potato whale sunflower sphere circle triangle pecan lemur ] [ company frequency time visual laws owl mule alligator badger black clover deer donkey emu grey horse racoon possum whale apricot rectangle oval fennel pecan lavender ] [ lion otter orange baboon bull cat daisy flamingo green iris racoon possum vulture apricot rectangle fennel raisin lemur ] [ equal orchid orange blossom camel chameleon deer fish green jackal possum whale rectangle oval triangle pecan ] [ equal conversation speak orchid orange bat camel deer fish grey iris racoon possum whale flax rectangle triangle pumpkin lemur ]']\r\ntarget = ['Later on in a career, 300 people are clients. You have their attention. The only attention needed is to make up for attrition or to continue growth at the desired rate. With that being said, be on the lookout for me in some crazy Hawaiian shirts. Maybe a fun tie when I have to dress up.', 'To his detriment, I don’t remember anything else. I thought of the rule again a few days ago because of a Hawaiian style, Sandlot movie shirt. I was the person wearing it, and yes, I did receive all kinds of attention. Or maybe, it was the shirt. 
Either way people were talking to me.']\r\n\r\nmodelName = \"t5-small\"\r\n\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(modelName, model_max_length=512)\r\nsource_tokens = [tokenizer(i) for i in source]\r\ntarget_tokens = [tokenizer(i) for i in target]\r\n\r\n\r\n# Model & Optimizer\r\nfrom transformers import T5ForConditionalGeneration\r\nmodel = T5ForConditionalGeneration.from_pretrained(modelName, device_map=\"balanced\")\r\nprint(set(model.hf_device_map.values()))\r\n# model.parallelize() #Model Parallelism\r\n# model.to('cuda:0')\r\n\r\nimport torch\r\noptimizer = torch.optim.SGD(model.parameters(), lr=0.01)\r\n\r\n\r\n## Train\r\nif __name__ == '__main__':\r\n for epoch in range(10):\r\n for i in range(2):\r\n loss = model(input_ids=torch.tensor(source_tokens[i]['input_ids']).unsqueeze(0),\r\n attention_mask=torch.tensor(source_tokens[i]['attention_mask']).unsqueeze(0),\r\n labels=torch.tensor(target_tokens[i]['input_ids']).unsqueeze(0).to(0)).loss\r\n loss.backward()\r\n optimizer.step()\r\n optimizer.zero_grad()\r\n print(loss)\r\n```\r\nIs there any problem when using more than 2 GPUS?",
"Interesting, can you run the same script on CPU?\r\nWhenever you have a `RuntimeError: CUDA error: device-side assert triggered` a good practice is to run the same script on CPU and check the error message",
"@younesbelkada \r\nIts strange. When I change the environment to 2 GPUS, it works....",
"@younesbelkada \r\nI think there are some problems when using more that 2 GPUS(for example, 4 GPUS). Do you have plans to fix this problem?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This issue is still occuring on the newest transformers version: 4.26.1. I also managed to train on two GPU's, but when I increase number of GPU's, I get error \"RuntimeError: CUDA error: device-side assert triggered\"."
] | 1,673
| 1,677
| 1,676
|
NONE
| null |
### System Info
* Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.27
* Python version: 3.10.8
* transformers version: 4.25.1
* huggingface-hub version: 0.11.1
* PyTorch version (GPU?): pytorch_1.13.1-cuda11.6-cudnn8-runtime
* Using GPU in script?: yes
* Using distributed or parallel set-up in script?: yes
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When I using "t5-small" to generate target text from source text, If I set ```model.parallelize()``` before getting loss, the loss will be ```nan```. But If I just set ```model.cuda()```, the loss will be normal. Is there anything worng with the ```parallelize()``` function? Because as far as I know, pytorch does not need to do any special settings for backward() and parameters update when the model parallelism.
Here is a toy example:
```python
## Data
source = ['[ equal statistic job customer ostrich orange badger blue bull daisy giraffe hamster ivy rabbit possum whale cashew rectangle oval square pecan ] [ you orchid orange frog grey ivy racoon potato whale flax cylinder fennel pumpkin leopard ] [ desire otter mole albatross bat buffalo cat daisy grey hedgehog holly racoon squirrel potato whale apricot cylinder heart fennel raisin lilac ] [ action watch speak otter orange alligator bat bull clover daisy fish green horse racoon potato whale apricot rectangle circle fennel pecan lavender ] [ emotion ostrich orange alligator badger black cobra daisy fish green hazel horse squirrel potato wolf rectangle oval fennel pecan leopard ]', '[ problem ostrich mule badger black bull fox green hazel ivy rabbit potato whale sunflower sphere circle triangle pecan lemur ] [ company frequency time visual laws owl mule alligator badger black clover deer donkey emu grey horse racoon possum whale apricot rectangle oval fennel pecan lavender ] [ lion otter orange baboon bull cat daisy flamingo green iris racoon possum vulture apricot rectangle fennel raisin lemur ] [ equal orchid orange blossom camel chameleon deer fish green jackal possum whale rectangle oval triangle pecan ] [ equal conversation speak orchid orange bat camel deer fish grey iris racoon possum whale flax rectangle triangle pumpkin lemur ]']
target = ['Later on in a career, 300 people are clients. You have their attention. The only attention needed is to make up for attrition or to continue growth at the desired rate. With that being said, be on the lookout for me in some crazy Hawaiian shirts. Maybe a fun tie when I have to dress up.', 'To his detriment, I don’t remember anything else. I thought of the rule again a few days ago because of a Hawaiian style, Sandlot movie shirt. I was the person wearing it, and yes, I did receive all kinds of attention. Or maybe, it was the shirt. Either way people were talking to me.']
modelName = "t5-small"
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(modelName, model_max_length=512)
source_tokens = [tokenizer(i) for i in source]
target_tokens = [tokenizer(i) for i in target]
# Model & Optimizer
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained(modelName)
model.parallelize() #Model Parallelism
# model.to('cuda:0')
import torch
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
## Train
if __name__ == '__main__':
for epoch in range(10):
for i in range(2):
loss = model(input_ids=torch.tensor(source_tokens[i]['input_ids']).unsqueeze(0).to('cuda:0'),
attention_mask=torch.tensor(source_tokens[i]['attention_mask']).unsqueeze(0).to('cuda:0'),
labels=torch.tensor(target_tokens[i]['input_ids']).unsqueeze(0).to('cuda:0')).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(loss)
```
### Expected behavior
The loss shouldn't be ```nan``` when calling ```model.parallelize()```; it should be the same as when calling ```model.to('cuda:0')```.
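For reference, a minimal sketch of the approach that worked in the comments, replacing the deprecated ```parallelize()``` with `accelerate` dispatch via `device_map` (distilled from the thread, not official guidance):
```python
from transformers import T5ForConditionalGeneration

# Let accelerate split the layers across the available GPUs instead of parallelize().
model = T5ForConditionalGeneration.from_pretrained("t5-small", device_map="balanced")
print(set(model.hf_device_map.values()))  # sanity check: should list more than one device

# Per the working script in the comments, inputs can stay on CPU and the
# labels are moved to device 0; accelerate's hooks handle device placement.
```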
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21093/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21092
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21092/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21092/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21092/events
|
https://github.com/huggingface/transformers/issues/21092
| 1,529,794,552
|
I_kwDOCUB6oc5bLs_4
| 21,092
|
Add Epsilon and Eta sampling
|
{
"login": "shermansiu",
"id": 12627125,
"node_id": "MDQ6VXNlcjEyNjI3MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/12627125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shermansiu",
"html_url": "https://github.com/shermansiu",
"followers_url": "https://api.github.com/users/shermansiu/followers",
"following_url": "https://api.github.com/users/shermansiu/following{/other_user}",
"gists_url": "https://api.github.com/users/shermansiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shermansiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shermansiu/subscriptions",
"organizations_url": "https://api.github.com/users/shermansiu/orgs",
"repos_url": "https://api.github.com/users/shermansiu/repos",
"events_url": "https://api.github.com/users/shermansiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shermansiu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante ",
"Yesterday's ACL 2023 tutorial on \"Generating Text from Large Language Models\" covers eta-sampling and more! John Hewitt, the first author of the eta-sampling paper, was one of the presenters for that tutorial!\r\n\r\nSite: https://rycolab.io/classes/acl-2023-tutorial/\r\nSlides: https://drive.google.com/file/d/1UHbGcjzBURG1n2DufC7iDTmGNjIz5Dp_/view"
] | 1,673
| 1,689
| 1,673
|
CONTRIBUTOR
| null |
### Feature request
I would like to add Epsilon and Eta sampling from [Truncation Sampling as Language Model Desmoothing](https://arxiv.org/abs/2210.15191). They are novel truncation sampling decoding algorithms that have led to better human judgement scores and less repetition than nucleus sampling.
- [Official repository](https://github.com/john-hewitt/truncation-sampling)
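For intuition, here is a minimal sketch of the two truncation rules from the paper (illustrative only, with example default thresholds; not the eventual `transformers` implementation):

```python
import math
import torch

def epsilon_warp(logits: torch.Tensor, epsilon: float = 3e-4) -> torch.Tensor:
    # Epsilon sampling: drop every token whose probability is below epsilon.
    probs = logits.softmax(dim=-1)
    mask = probs < epsilon
    mask.scatter_(-1, logits.argmax(dim=-1, keepdim=True), False)  # always keep top-1
    return logits.masked_fill(mask, float("-inf"))

def eta_warp(logits: torch.Tensor, eta: float = 2e-3) -> torch.Tensor:
    # Eta sampling: the threshold adapts to the entropy h of the next-token
    # distribution, using min(eta, sqrt(eta) * exp(-h)) as in the paper.
    probs = logits.softmax(dim=-1)
    entropy = torch.distributions.Categorical(logits=logits).entropy()
    threshold = torch.minimum(
        torch.full_like(entropy, eta), math.sqrt(eta) * torch.exp(-entropy)
    )
    mask = probs < threshold.unsqueeze(-1)
    mask.scatter_(-1, logits.argmax(dim=-1, keepdim=True), False)  # always keep top-1
    return logits.masked_fill(mask, float("-inf"))
```

After truncation, sampling proceeds over the renormalized surviving tokens, so both rules reduce to greedy decoding when the distribution is very peaked.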
### Motivation
I would like to generate more human-like and less repetitious text with Huggingface models.
### Your contribution
I am able to submit a PR for this.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21092/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21092/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21091
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21091/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21091/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21091/events
|
https://github.com/huggingface/transformers/pull/21091
| 1,529,457,008
|
PR_kwDOCUB6oc5HLeGI
| 21,091
|
Support for custom model transform in `PreTrainedModel.from_pretrained`
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21091). All of your documentation changes will be reflected on that endpoint.",
"I see the point, but then isn't bitsandbytes integration itself against the transformers philosophy as well? Why was bitsandbytes integrated in `from_pretrained`? This PR allows as an opt-in to plug external code in transformers, it does not change the one-model-one-file experience for existing models using the vanilla code.\r\n\r\nI'd see this kind of modularity as a compromise to allow to plug in transformers modifications to the modeling that users may want to do programmatically, but that we don't want to host in the canonical modeling implementations.\r\n\r\nEdit: Ok so from our offline discussion, I understand that the idea in this PR breaks the \"load from transformers\" on the Hub, which is something we want to avoid. Model loading should work out of the box from the Hub. An alternative idea is to have a quantization config directly in the model's config, and do the transform from there. I'll have a look at this.",
"To fully put the result of our offline discussion in the open, the idea would be to:\r\n- migrate from several arguments (like `load_in_8bit=True`) in `from_pretrained` to a quantization config as there are many other quantization methods we would like to support.\r\n- when we quantize the model in `from_pretrained`, set it in the config (so for now indicate with flags which 8bit parameter have been used and when we migrate to a quantization config, put it in the model config)\r\n- this way when the quantized model is pushed to the Hub, the information about how it was quantized is there. We can thus adapt the `from_pretrained` method to prepare the model just after its instantation and before the state dict is loaded using this information.\r\n\r\nI'll also add this to the agenda of our next core maintainer meeting to make sure we are all fully aligned on this :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,676
| 1,676
|
COLLABORATOR
| null |
Add the possibility to pass a custom transform with signature `(model: torch.nn.modules.module.Module, **kwargs) -> None` that can do any transform on the model.
This can be seen as an extension of the bitsandbytes integration, making it possible to pass any transform, e.g. one that modifies the model's keys. A direct application is an easier integration of SmoothQuant or k-bit quantization in transformers. Defining the transform should be left to an external library.
Some other necessary code modifications could be missing; I see the bitsandbytes integration modifies a bit more.
This is just a draft to see if it's a good idea or not.
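To make the idea concrete, here is a hypothetical usage sketch (the `model_transform` keyword name and the transform body are illustrative only, not the final API of this draft):
```python
import torch.nn as nn
from transformers import AutoModelForCausalLM

def my_transform(model: nn.Module, **kwargs) -> None:
    # In-place surgery on the freshly instantiated model goes here, e.g.
    # swapping nn.Linear modules for quantized ones from an external library.
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            pass  # replace the module in its parent module here

# Hypothetical: the transform would run inside from_pretrained before the
# state dict is loaded, so modified keys can still match the checkpoint.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", model_transform=my_transform)
```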
## Before submitting
- [ ] write doc
- [ ] write tests
- [ ] check if it works with fx.GraphModule
## Who can review?
@sgugger I'd be happy to have your opinion on this.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21091/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21091",
"html_url": "https://github.com/huggingface/transformers/pull/21091",
"diff_url": "https://github.com/huggingface/transformers/pull/21091.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21091.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21090
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21090/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21090/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21090/events
|
https://github.com/huggingface/transformers/pull/21090
| 1,529,453,000
|
PR_kwDOCUB6oc5HLdOs
| 21,090
|
Add: An introductory guide for text generation
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi! Thank you all for the invaluable feedback! I have addressed all the comments and suggestions. Please let me know if there's anything else I need to improve or if we can merge this PR and https://github.com/huggingface/transformers/pull/21112. ",
"I think both can be merged. Thank you @MKhalusova ❤️ "
] | 1,673
| 1,674
| 1,673
|
CONTRIBUTOR
| null |
Context: current [text generation doc](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation) is large, difficult to navigate, and may be overwhelming to users who are new to text generation.
I suggest splitting the document into two parts:
1. an introductory guide that explains the basics and provides some examples
2. trimmed down API reference to be used for looking up specific parameter descriptions (https://github.com/huggingface/transformers/pull/21112)
This PR is the first part. It adds a guide that introduces readers to the text generation strategies, explains the generation configuration defaults, and provides some examples. The second part will follow in a separate PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21090/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21090",
"html_url": "https://github.com/huggingface/transformers/pull/21090",
"diff_url": "https://github.com/huggingface/transformers/pull/21090.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21090.patch",
"merged_at": 1673976202000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21089
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21089/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21089/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21089/events
|
https://github.com/huggingface/transformers/issues/21089
| 1,529,295,034
|
I_kwDOCUB6oc5bJzC6
| 21,089
|
Different behavior in DistilBERT when using "inputs_embeds"
|
{
"login": "DiegoOrtego",
"id": 24732433,
"node_id": "MDQ6VXNlcjI0NzMyNDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/24732433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DiegoOrtego",
"html_url": "https://github.com/DiegoOrtego",
"followers_url": "https://api.github.com/users/DiegoOrtego/followers",
"following_url": "https://api.github.com/users/DiegoOrtego/following{/other_user}",
"gists_url": "https://api.github.com/users/DiegoOrtego/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DiegoOrtego/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DiegoOrtego/subscriptions",
"organizations_url": "https://api.github.com/users/DiegoOrtego/orgs",
"repos_url": "https://api.github.com/users/DiegoOrtego/repos",
"events_url": "https://api.github.com/users/DiegoOrtego/events{/privacy}",
"received_events_url": "https://api.github.com/users/DiegoOrtego/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hey, can you provide a reproducing script ? ",
"@ArthurZucker I don't think that I'll have soon time to prepare a reproducing script. But, I'm happy to give more details on the issue found.\r\n\r\nIn distilbert when input_embeds is not None, the self.embedding layer is skipped completely:\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/distilbert/modeling_distilbert.py\r\n \r\n```\r\n if inputs_embeds is None:\r\n inputs_embeds = self.embeddings(input_ids) # (bs, seq_length, dim)\r\n return self.transformer(\r\n x=inputs_embeds,\r\n attn_mask=attention_mask,\r\n head_mask=head_mask,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n return_dict=return_dict,\r\n )\r\n```\r\n\r\nHowever, in BERT the self.embedding() call always happens and does the following:\r\nhttps://github.com/huggingface/transformers/blob/7d2a5fa749d22f403fe6ceac7d62c003240aee45/src/transformers/models/bert/modeling_bert.py\r\n\r\n```\r\n\r\n if inputs_embeds is None:\r\n inputs_embeds = self.word_embeddings(input_ids)\r\n token_type_embeddings = self.token_type_embeddings(token_type_ids)\r\n\r\n embeddings = inputs_embeds + token_type_embeddings\r\n if self.position_embedding_type == \"absolute\":\r\n position_embeddings = self.position_embeddings(position_ids)\r\n embeddings += position_embeddings\r\n embeddings = self.LayerNorm(embeddings)\r\n embeddings = self.dropout(embeddings)\r\n```\r\n\r\nSo, when passing input_embeds the call to self.word_embeddings() does not take place, so assuming that input_embeds are already word embbedings, but positional embedding addition, layer normalization and dropout happen after.\r\n\r\nIn summary, passing input_embeds to BERT (and other architectures) assumes that the input_embeds are the word embeddings. However, in DistilBERT if you pass the word embeddings as input_embeds you would be skipping adding positional embeddings, layer norm and dropout. \r\nThe docs do not say anything about this difference and does not seem reasonable having to do the skipped operations manually before passing input_embeds to distilBERT.\r\n\r\n\r\n\r\n\r\n",
"Okay, this does not really need a reproduction script and I agree with you, the expected behaviour is that if you pre-compute the `input_embeds`, the output of the model should certainly be the same as if you were passing the input ids. This is true for most of our models. I have to check whether there is a particular reason for this in DistillBert, otherwise will push a fix! ",
"Great news @ArthurZucker !"
] | 1,673
| 1,677
| 1,677
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: Linux-3.10.0-1160.80.1.0.1.el7.x86_64-x86_64-with-glibc2.27
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1 (True)
@ArthurZucker @younesbelkada
### Reproduction
Do a forward pass with DistilBERT passing "inputs_embeds" instead of "input_ids", where "inputs_embeds" contains the output of a forward pass over the word embedding matrix, i.e. just picking the token embeddings.
When doing this, one would expect the same behaviour as in other popular models like BERT or RoBERTa, but DistilBERT skips "positional embedding addition + LayerNorm + dropout" because it skips the self.embeddings() call.
I would expect passing inputs_embeds instead of input_ids to behave coherently across different architectures, but that is not the case, at least for DistilBERT. I am not sure if other models have this issue.
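A minimal sketch of the comparison described above (the checkpoint name is just an example; with BERT or RoBERTa the two outputs match, with DistilBERT they do not):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased").eval()

inputs = tokenizer("hello world", return_tensors="pt")
# Pre-compute only the token embeddings, as described above.
word_embeds = model.embeddings.word_embeddings(inputs["input_ids"])

with torch.no_grad():
    out_ids = model(**inputs).last_hidden_state
    out_embeds = model(
        inputs_embeds=word_embeds, attention_mask=inputs["attention_mask"]
    ).last_hidden_state

# True only if inputs_embeds went through the same positional-embedding and
# LayerNorm path as input_ids; with DistilBERT this prints False.
print(torch.allclose(out_ids, out_embeds))
```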
### Expected behavior
Properly build the input embedding by adding positional embeddings + LayerNorm + dropout (as happens in modeling_bert or modeling_roberta). This does not happen because the call to self.embeddings() is skipped.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21089/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21088
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21088/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21088/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21088/events
|
https://github.com/huggingface/transformers/pull/21088
| 1,529,056,666
|
PR_kwDOCUB6oc5HKHPO
| 21,088
|
fix typo in comment
|
{
"login": "soulseen",
"id": 16031013,
"node_id": "MDQ6VXNlcjE2MDMxMDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/16031013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soulseen",
"html_url": "https://github.com/soulseen",
"followers_url": "https://api.github.com/users/soulseen/followers",
"following_url": "https://api.github.com/users/soulseen/following{/other_user}",
"gists_url": "https://api.github.com/users/soulseen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soulseen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soulseen/subscriptions",
"organizations_url": "https://api.github.com/users/soulseen/orgs",
"repos_url": "https://api.github.com/users/soulseen/repos",
"events_url": "https://api.github.com/users/soulseen/events{/privacy}",
"received_events_url": "https://api.github.com/users/soulseen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
Signed-off-by: xiaoyang zhu <zhuxiaoyang1996@gmail.com>
fix typo in comment
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21088/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21088",
"html_url": "https://github.com/huggingface/transformers/pull/21088",
"diff_url": "https://github.com/huggingface/transformers/pull/21088.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21088.patch",
"merged_at": 1673455901000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21087
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21087/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21087/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21087/events
|
https://github.com/huggingface/transformers/issues/21087
| 1,528,969,839
|
I_kwDOCUB6oc5bIjpv
| 21,087
|
GIT batch prediction seems to be broken
|
{
"login": "roelschr",
"id": 19557581,
"node_id": "MDQ6VXNlcjE5NTU3NTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/19557581?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roelschr",
"html_url": "https://github.com/roelschr",
"followers_url": "https://api.github.com/users/roelschr/followers",
"following_url": "https://api.github.com/users/roelschr/following{/other_user}",
"gists_url": "https://api.github.com/users/roelschr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/roelschr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roelschr/subscriptions",
"organizations_url": "https://api.github.com/users/roelschr/orgs",
"repos_url": "https://api.github.com/users/roelschr/repos",
"events_url": "https://api.github.com/users/roelschr/events{/privacy}",
"received_events_url": "https://api.github.com/users/roelschr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi,\r\n\r\nIf you want to run inference on individual frames, you'll need to use a model that expects individual frames, not videos.\r\n\r\nHere you're loading [microsoft/git-base-vatex](https://huggingface.co/microsoft/git-base-vatex), hence, it expects `pixel_values` of shape (batch_size, num_frames, num_channels, height, width).\r\n\r\nTo run inference on a batch of images, you can use models which are trained on image captioning datasets, like [microsoft/git-base](https://huggingface.co/microsoft/git-base), [microsoft/git-base-coco](https://huggingface.co/microsoft/git-base-coco), [microsoft/git-base-textcaps](https://huggingface.co/microsoft/git-base-textcaps) (as well as any of the large variants).\r\n\r\nEdit; after investigating it still seems like there's an error. Looking into this",
"Sorry, I should have also shown that it doesn't work on captioning models (even though I have tested it on my side, both `git-base-coco` and `git-large-coco`). My bad!\r\n\r\nI appreciate that you're looking into this 🙏 ",
"For some reason Github isn't automatically linking the PR that will fix it: #21071 ",
"Update: seems that the PR above doesn't fix it. So issue remains open",
"Ok figured this out! The problem is that you're not passing `input_ids` of the same batch size. By default, the generate method will just use the start token ID (which for GIT equals model.config.bos_token_id = 101). However when sending a batch of images through the model, we also need to prepare a batch of start tokens.\r\n\r\nThe following works:\r\n\r\n```\r\nfrom transformers import AutoProcessor, AutoModelForCausalLM\r\n\r\nimport requests\r\nfrom PIL import Image\r\nimport torch\r\n\r\nprocessor = AutoProcessor.from_pretrained(\"microsoft/git-base-coco\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"microsoft/git-base-coco\")\r\n\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\n\r\npixel_values = processor(images=image, return_tensors=\"pt\").pixel_values\r\n\r\npixel_values = torch.stack([pixel_values, pixel_values], dim=0).squeeze()\r\n\r\nstart_token_id = model.config.bos_token_id\r\n\r\ngenerated_ids = model.generate(pixel_values=pixel_values, input_ids=torch.tensor([[start_token_id], [start_token_id]]), max_length=50)\r\ngenerated_captions = processor.batch_decode(generated_ids, skip_special_tokens=True)\r\nprint(generated_captions)\r\n```\r\nI'll add a corresponding test to make sure this is tested."
] | 1,673
| 1,674
| 1,674
|
NONE
| null |
### System Info
```
name : transformers
version : 4.26.0.dev0
```
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm trying to run image captioning in batches. The easiest way to try that was to adapt the video captioning example [here](https://huggingface.co/docs/transformers/main/en/model_doc/git#transformers.GitForCausalLM.forward.example-3). According to the [source code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/git/modeling_git.py#L1234), `pixel_values` must have shape `(batch_size, num_frames, num_channels, height, width)` or `(batch_size, num_channels, height, width)`. But reshaping the example's `pixel_values` to turn video captioning into batch image captioning throws the following error:
```RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 6 but got size 1 for tensor number 1 in the list.```
raised at `hidden_states = torch.cat((projected_visual_features, embedding_output), dim=1)` (line 1268 of modeling_git.py).
Here is the code for reproducibility:
```python
from transformers import AutoProcessor, AutoModelForCausalLM
from PIL import Image
import numpy as np
from huggingface_hub import hf_hub_download
from decord import VideoReader, cpu

processor = AutoProcessor.from_pretrained("microsoft/git-base-vatex")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-vatex")

# set seed for reproducibility
np.random.seed(45)


def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    # sample `clip_len` evenly spaced frame indices from a random window of the video
    converted_len = int(clip_len * frame_sample_rate)
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
    return indices


def sample_frames(file_path, num_frames):
    # decode `num_frames` frames from the video file as numpy arrays
    videoreader = VideoReader(file_path, num_threads=1, ctx=cpu(0))
    videoreader.seek(0)
    indices = sample_frame_indices(clip_len=num_frames, frame_sample_rate=4, seg_len=len(videoreader))
    frames = videoreader.get_batch(indices).asnumpy()
    return list(frames)


# load video
file_path = hf_hub_download(
    repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
)

# sample frames
num_frames = model.config.num_image_with_embedding
print(num_frames)
frames = sample_frames(file_path, num_frames)

# reshape to a batch of single images, (num_frames, 3, 224, 224), instead of one video
# pixel_values = processor(images=frames, return_tensors="pt").pixel_values.reshape((num_frames, 1, 3, 224, 224))
pixel_values = processor(images=frames, return_tensors="pt").pixel_values.reshape((num_frames, 3, 224, 224))
print(pixel_values.size())

generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print("Generated caption:", processor.batch_decode(generated_ids, skip_special_tokens=True))
```
### Expected behavior
Expected it to generate IDs for each image in the batch.
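A condensed sketch of the workaround from the comments above, assuming an image-captioning checkpoint (`microsoft/git-base-coco`) as suggested there: `generate()` defaults to a single BOS token, so a batch of images needs a matching batch of start tokens.
```python
# Sketch of the batched-captioning workaround described in the comments above.
# Assumes the microsoft/git-base-coco checkpoint; a two-image batch is used for illustration.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# a batch of two images: shape (2, 3, 224, 224)
pixel_values = processor(images=[image, image], return_tensors="pt").pixel_values

# one BOS token per image keeps the text branch at the same batch size as the vision branch
bos = model.config.bos_token_id
input_ids = torch.full((pixel_values.shape[0], 1), bos, dtype=torch.long)

generated_ids = model.generate(pixel_values=pixel_values, input_ids=input_ids, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```
Passing `input_ids` with one row per image keeps the text embeddings and the projected visual features at the same batch size, which is exactly where the `torch.cat` above was failing.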
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21087/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21087/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21086
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21086/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21086/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21086/events
|
https://github.com/huggingface/transformers/issues/21086
| 1,528,842,842
|
I_kwDOCUB6oc5bIEpa
| 21,086
|
AutoModels for region-to-phrase-alignment and natural-language-for-visual-reasoning
|
{
"login": "mszsorondo",
"id": 52178350,
"node_id": "MDQ6VXNlcjUyMTc4MzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/52178350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mszsorondo",
"html_url": "https://github.com/mszsorondo",
"followers_url": "https://api.github.com/users/mszsorondo/followers",
"following_url": "https://api.github.com/users/mszsorondo/following{/other_user}",
"gists_url": "https://api.github.com/users/mszsorondo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mszsorondo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mszsorondo/subscriptions",
"organizations_url": "https://api.github.com/users/mszsorondo/orgs",
"repos_url": "https://api.github.com/users/mszsorondo/repos",
"events_url": "https://api.github.com/users/mszsorondo/events{/privacy}",
"received_events_url": "https://api.github.com/users/mszsorondo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Note that the ONNX conversion is now done directly in the optimum library, so this is probably where you would need to add something.",
"That's true, I'm doing the ONNX conversion from optimum. But optimum references AutoModels directly from transformers. I'd just program the AutoModel subclasses without any optimum stuff, of course.",
"@sgugger Sorry for insisting, I know you have lots of issues opened already. Just tagging you to see if you still think this should be solved from Optimum",
"I think the optimum library should provide an API for models that don't have an auto API yes (if it does not already), as there will always be such models and we won't add a new auto class for just one model.\r\n\r\ncc @michaelbenayoun who might have more information.",
"Hi @mszsorondo,\r\nCould you open a PR on the Optimum repo, with your request explained please?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,676
| 1,676
|
NONE
| null |
### Feature request
Hi! Region-to-phrase alignment and natural language for visual reasoning have no AutoModel classes yet. @sgugger, is it OK if I open a PR to add them?
### Motivation
This is an issue I faced when exporting VisualBert to ONNX, since the task mapping can't be done without these classes. Adding them would allow the export to work for every task.
### Your contribution
I'll open a PR and solve it myself if allowed
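A hypothetical sketch of what one of the requested auto classes could look like, following the pattern used in `src/transformers/models/auto/modeling_auto.py`. The mapping and auto class names below do not exist in transformers; only `VisualBertForRegionToPhraseAlignment` is a real model class.
```python
# Hypothetical sketch only: MODEL_FOR_REGION_TO_PHRASE_ALIGNMENT_MAPPING(_NAMES) and
# AutoModelForRegionToPhraseAlignment are illustrative names mirroring the real auto classes.
from collections import OrderedDict

from transformers.models.auto.auto_factory import _BaseAutoModelClass, _LazyAutoMapping
from transformers.models.auto.configuration_auto import CONFIG_MAPPING_NAMES

# model type -> model class name, as in modeling_auto.py
MODEL_FOR_REGION_TO_PHRASE_ALIGNMENT_MAPPING_NAMES = OrderedDict(
    [("visual_bert", "VisualBertForRegionToPhraseAlignment")]
)

MODEL_FOR_REGION_TO_PHRASE_ALIGNMENT_MAPPING = _LazyAutoMapping(
    CONFIG_MAPPING_NAMES, MODEL_FOR_REGION_TO_PHRASE_ALIGNMENT_MAPPING_NAMES
)


class AutoModelForRegionToPhraseAlignment(_BaseAutoModelClass):
    _model_mapping = MODEL_FOR_REGION_TO_PHRASE_ALIGNMENT_MAPPING
```
With such a mapping registered, `AutoModelForRegionToPhraseAlignment.from_pretrained(...)` would resolve to the VisualBert class, which is what the ONNX task mapping needs.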
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21086/timeline
|
completed
| null | null |