url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/21085
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21085/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21085/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21085/events
|
https://github.com/huggingface/transformers/issues/21085
| 1,528,667,280
|
I_kwDOCUB6oc5bHZyQ
| 21,085
|
`import decord` crashes Python kernel/process when moving X-CLIP or other video classification models to CUDA GPU
|
{
"login": "e-caste",
"id": 48513706,
"node_id": "MDQ6VXNlcjQ4NTEzNzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/48513706?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-caste",
"html_url": "https://github.com/e-caste",
"followers_url": "https://api.github.com/users/e-caste/followers",
"following_url": "https://api.github.com/users/e-caste/following{/other_user}",
"gists_url": "https://api.github.com/users/e-caste/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-caste/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-caste/subscriptions",
"organizations_url": "https://api.github.com/users/e-caste/orgs",
"repos_url": "https://api.github.com/users/e-caste/repos",
"events_url": "https://api.github.com/users/e-caste/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-caste/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Unstale",
"cc @amyeroberts "
] | 1,673
| 1,677
| 1,677
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-6.0.12-76060006-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger for the docs and @NielsRogge for X-CLIP specifically.
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I was trying to run inference on GPU with X-CLIP on my own dataset. To do so, I followed the [example code](https://huggingface.co/docs/transformers/main/en/model_doc/xclip#transformers.XCLIPModel.forward.example) in the docs, which uses decord as the library to load videos into memory. I tested it and it ran perfectly, _but_ I noticed the model was on the CPU.
A simple `model.to(torch.device("cuda"))` ought to do the trick, right? Well no, here is the rabbit hole I went down: https://github.com/huggingface/transformers/issues/21054.
My conclusion is that a simple `import decord` before trying to load the model into GPU memory is enough to make the `python` process crash, be it in the terminal or a Jupyter kernel.
This happens both with the latest decord version from PyPI (which can only run on CPU) and with the latest decord version compiled from source with CUDA enabled.
To fix this, I used the [code](https://colab.research.google.com/gist/nateraw/c327cb6ff6b074e6ddc8068d19c0367d/pyav-io.ipynb#scrollTo=fzGRpWaUqnTL) generously provided by @nateraw built on pyAV, while discussing how to integrate videos in the datasets library (https://github.com/huggingface/datasets/issues/5225).
### Expected behavior
The model should be transferable to the GPU without issue.
To instruct people on how to do so, the docs should be updated to make use of pyAV instead of decord to avoid sending other users into hours if not days of debugging.
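The pyAV-based loading from the gist linked above can be sketched roughly as follows. This is a minimal sketch, not the gist's exact code: it assumes PyAV is installed as `av`, and the helper names and signatures here are illustrative.

```python
import numpy as np

def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    # Pick `clip_len` evenly spaced frame indices from a random window of
    # `clip_len * frame_sample_rate` frames inside a video of `seg_len` frames.
    converted_len = int(clip_len * frame_sample_rate)
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    return np.clip(indices, start_idx, end_idx - 1).astype(np.int64)

def read_video_pyav(path, indices):
    # Decode only the frames at `indices` with PyAV, avoiding decord entirely.
    import av  # imported lazily so the sampler above works without PyAV
    wanted = set(int(i) for i in indices)
    frames = []
    with av.open(path) as container:
        for i, frame in enumerate(container.decode(video=0)):
            if i > max(wanted):
                break
            if i in wanted:
                frames.append(frame.to_ndarray(format="rgb24"))
    return np.stack(frames)  # (num_frames, height, width, 3)
```

Since no `import decord` happens anywhere, moving the model to the GPU with `model.to("cuda")` is unaffected.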
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21085/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/21085/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21084
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21084/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21084/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21084/events
|
https://github.com/huggingface/transformers/pull/21084
| 1,528,664,729
|
PR_kwDOCUB6oc5HIytb
| 21,084
|
Add Japanese translation to multilingual.mdx
|
{
"login": "shogohida",
"id": 10365357,
"node_id": "MDQ6VXNlcjEwMzY1MzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/10365357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shogohida",
"html_url": "https://github.com/shogohida",
"followers_url": "https://api.github.com/users/shogohida/followers",
"following_url": "https://api.github.com/users/shogohida/following{/other_user}",
"gists_url": "https://api.github.com/users/shogohida/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shogohida/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shogohida/subscriptions",
"organizations_url": "https://api.github.com/users/shogohida/orgs",
"repos_url": "https://api.github.com/users/shogohida/repos",
"events_url": "https://api.github.com/users/shogohida/events{/privacy}",
"received_events_url": "https://api.github.com/users/shogohida/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I used the formal form to translate because Japanese is normally written in a formal way in docs. \r\n\r\nI wrote this memo because one of the requirements was to use an informal tone. https://github.com/huggingface/transformers/issues/18413#issue-1325310941",
"@ArthurZucker can you have a look here? Thanks!"
] | 1,673
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
Signed-off-by: Shogo Hida <shogo.hida@gmail.com>
# What does this PR do?
Adds Japanese translation to multilingual.mdx
Fixes #18413
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@omarespejel @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21084/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21084",
"html_url": "https://github.com/huggingface/transformers/pull/21084",
"diff_url": "https://github.com/huggingface/transformers/pull/21084.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21084.patch",
"merged_at": 1674032899000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21083
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21083/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21083/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21083/events
|
https://github.com/huggingface/transformers/pull/21083
| 1,528,616,847
|
PR_kwDOCUB6oc5HIogG
| 21,083
|
Optimize inference only mode memory if ipex is used
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger @jianan-gu @yao-matrix please review",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
# What does this PR do?
Optimizes memory usage in inference-only mode when IPEX is used.
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Library:
- trainer: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21083/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21083",
"html_url": "https://github.com/huggingface/transformers/pull/21083",
"diff_url": "https://github.com/huggingface/transformers/pull/21083.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21083.patch",
"merged_at": 1673514078000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21082
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21082/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21082/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21082/events
|
https://github.com/huggingface/transformers/pull/21082
| 1,528,574,665
|
PR_kwDOCUB6oc5HIfgO
| 21,082
|
Corrected a spelling mistake in CODE_OF_CONDUCT.md
|
{
"login": "izam-mohammed",
"id": 106471909,
"node_id": "U_kgDOBlih5Q",
"avatar_url": "https://avatars.githubusercontent.com/u/106471909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/izam-mohammed",
"html_url": "https://github.com/izam-mohammed",
"followers_url": "https://api.github.com/users/izam-mohammed/followers",
"following_url": "https://api.github.com/users/izam-mohammed/following{/other_user}",
"gists_url": "https://api.github.com/users/izam-mohammed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/izam-mohammed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/izam-mohammed/subscriptions",
"organizations_url": "https://api.github.com/users/izam-mohammed/orgs",
"repos_url": "https://api.github.com/users/izam-mohammed/repos",
"events_url": "https://api.github.com/users/izam-mohammed/events{/privacy}",
"received_events_url": "https://api.github.com/users/izam-mohammed/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Ok π"
] | 1,673
| 1,673
| 1,673
|
NONE
| null |
# What does this PR do?
Corrected an English grammar mistake.
The noun phrase "representative" seems to be missing a determiner before it; consider adding an article like 'the' or 'a'.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21082/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21082",
"html_url": "https://github.com/huggingface/transformers/pull/21082",
"diff_url": "https://github.com/huggingface/transformers/pull/21082.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21082.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21081
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21081/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21081/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21081/events
|
https://github.com/huggingface/transformers/issues/21081
| 1,528,393,257
|
I_kwDOCUB6oc5bGW4p
| 21,081
|
Could swin-tiny-patch4-window7-224 be traced by using torch.jit.trace?
|
{
"login": "heylamourding",
"id": 30859212,
"node_id": "MDQ6VXNlcjMwODU5MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/30859212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/heylamourding",
"html_url": "https://github.com/heylamourding",
"followers_url": "https://api.github.com/users/heylamourding/followers",
"following_url": "https://api.github.com/users/heylamourding/following{/other_user}",
"gists_url": "https://api.github.com/users/heylamourding/gists{/gist_id}",
"starred_url": "https://api.github.com/users/heylamourding/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/heylamourding/subscriptions",
"organizations_url": "https://api.github.com/users/heylamourding/orgs",
"repos_url": "https://api.github.com/users/heylamourding/repos",
"events_url": "https://api.github.com/users/heylamourding/events{/privacy}",
"received_events_url": "https://api.github.com/users/heylamourding/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @NielsRogge ",
"Normally it should work, see https://github.com/huggingface/transformers/issues/17476 for details",
"Hi @NielsRogge , thanks. I tried the approach mentioned in https://github.com/huggingface/transformers/issues/17476. It works. \r\n\r\n```\r\nfrom transformers import SwinModel, SwinConfig\r\nimport types\r\nmodel = SwinModel.from_pretrained(\"microsoft/swin-tiny-patch4-window7-224\")\r\nmodel.eval()\r\nif not hasattr(model, 'forward_'): model.forward_ = model.forward\r\nmodel.forward = types.MethodType(lambda self,x: self.forward_(x).last_hidden_state, model)\r\nx = torch.randn(1,3,224,224)\r\ntraced = torch.jit.trace(model, x, check_trace = False) \r\n```\r\n\r\nHowever, I tried to convert traced model into neuron format and deploied to inferentia. \r\n I followed the [tutorial](https://huggingface.co/docs/transformers/main/en/torchscript) and ran code below: \r\n```\r\ntorch.neuron.trace(model, x, strict = False)\r\n```\r\nIt showed error below. **_May I know is SwinT convertible in neuron format?_** \r\n```\r\nINFO:Neuron:There are 23 ops of 3 different types in the TorchScript that are not compiled by neuron-cc: aten::adaptive_avg_pool1d, aten::index, aten::roll, (For more information see https://github.com/aws/aws-neuron-sdk/blob/master/release-notes/neuron-cc-ops/neuron-cc-ops-pytorch.md)\r\nINFO:Neuron:Number of arithmetic operators (pre-compilation) before = 1813, fused = 1135, percent fused = 62.6%\r\nWARNING:Neuron:torch.neuron.trace failed on _NeuronGraph$2347; falling back to native python function call\r\nERROR:Neuron:torch.jit.trace error. The PyTorch-Neuron trace Python API uses the\r\ntorch.jit.trace function in PyTorch to generate ScriptFunction models for execution\r\non Inferentia. Due to this, your exisiting PyTorch model must be torch jit traceable.\r\n```\r\n ",
"Hello @heylamourding,\r\n\r\nit seems like that Inferentia1 (`neuron-sdk`) is missing support for some operators for the `SWIN` model, that's not a transformers issue more a neuron-sdk issue. \r\ni saw you already opened an issue in the `neuron-sdk` repository: https://github.com/aws-neuron/aws-neuron-sdk/issues/626\r\n\r\n",
"Hi @philschmid, thanks for your reply! Sorry for creating inappropriate issue here. I will close the issue."
] | 1,673
| 1,674
| 1,674
|
NONE
| null |
### System Info
torch version: 1.12.0+cu102
transformers version: 4.18.0
model: microsoft/swin-tiny-patch4-window7-224
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi team, I want to trace the Swin model.
My first attempt was to directly trace the pretrained model.
```
import torch
import types
from transformers import SwinModel, SwinConfig
model = SwinModel.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model.eval()
x = torch.randn(1, 3, 224, 224)
if not hasattr(model, 'forward_'): model.forward_ = model.forward
# change the forward to make it traceable
model.forward = types.MethodType(lambda self, x: self.forward_(x).last_hidden_state, model)
traced = torch.jit.trace(model, x)
```
Error shows
>
> ---------------------------------------------------------------------------
> TracingCheckError Traceback (most recent call last)
> <ipython-input-55-622b7b3243c5> in <module>
> 8 # change the forward to make it traceable
> 9 model.forward = types.MethodType(lambda self,x: self.forward_(x).last_hidden_state, model)
> ---> 10 traced = torch.jit.trace(model, x)
> 11 # try:
> 12 # traced = torch.jit.trace(model, x)
>
> ~/miniconda3/envs/fsic_easy/lib/python3.8/site-packages/torch/jit/_trace.py in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
> 748
> 749 if isinstance(func, torch.nn.Module):
> --> 750 return trace_module(
> 751 func,
> 752 {"forward": example_inputs},
>
> ~/miniconda3/envs/fsic_easy/lib/python3.8/site-packages/torch/jit/_trace.py in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
> 990 )
> 991 else:
> --> 992 _check_trace(
> 993 [inputs],
> 994 func,
>
> ~/miniconda3/envs/fsic_easy/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
> 25 def decorate_context(*args, **kwargs):
> ...
> - %pooler : __torch__.torch.nn.modules.pooling.___torch_mangle_6083.AdaptiveAvgPool1d = prim::GetAttr[name="pooler"](%self.1)
> ? ^ -
> + %pooler : __torch__.torch.nn.modules.pooling.___torch_mangle_6338.AdaptiveAvgPool1d = prim::GetAttr[name="pooler"](%self.1)
> ? ^^
Then, I tried to disable the pooling layer by directly declaring the raw Swin structure.
```
import torch
import types
from transformers import SwinModel, SwinConfig

configuration = SwinConfig()
configuration.patch_norm = False
model = SwinModel(configuration, add_pooling_layer=False, use_mask_token=False)
model.eval()
x = torch.randn(1, 3, 224, 224)
if not hasattr(model, 'forward_'): model.forward_ = model.forward
# change the forward to make it traceable
model.forward = types.MethodType(lambda self, x: self.forward_(x).last_hidden_state, model)
traced = torch.jit.trace(model, x)
```
Error shows
>
> ---------------------------------------------------------------------------
> TracingCheckError Traceback (most recent call last)
> <ipython-input-56-c1e3f68ee2a2> in <module>
> 13 # torch.jit.save(traced, f'model/inferentia/trial3_traced.pt')
> 14
> ---> 15 traced = torch.jit.trace(model, x)
>
> ~/miniconda3/envs/fsic_easy/lib/python3.8/site-packages/torch/jit/_trace.py in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
> 748
> 749 if isinstance(func, torch.nn.Module):
> --> 750 return trace_module(
> 751 func,
> 752 {"forward": example_inputs},
>
> ~/miniconda3/envs/fsic_easy/lib/python3.8/site-packages/torch/jit/_trace.py in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
> 990 )
> 991 else:
> --> 992 _check_trace(
> 993 [inputs],
> 994 func,
>
> ~/miniconda3/envs/fsic_easy/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
> 25 def decorate_context(*args, **kwargs):
> 26 with self.clone():
> ---> 27 return func(*args, **kwargs)
> ...
> - %layernorm : __torch__.torch.nn.modules.normalization.___torch_mangle_6592.LayerNorm = prim::GetAttr[name="layernorm"](%self.1)
> ? ^^^
> + %layernorm : __torch__.torch.nn.modules.normalization.___torch_mangle_6846.LayerNorm = prim::GetAttr[name="layernorm"](%self.1)
> ? ^^^
>
May I know whether it is possible to trace the model?
### Expected behavior
The Swin model should be traceable, and the traced model should then be usable with aws-neuron.
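As the thread above notes, the `TracingCheckError` comes from the re-trace comparison mangling submodule names, so passing `check_trace=False` (the workaround from issue #17476) sidesteps it. A minimal sketch of that workaround, using a tiny stand-in module: `TinySwinLike` is my invention, not the real `SwinModel`, kept small so the example is self-contained; like Swin, it ends in an `AdaptiveAvgPool1d` pooler, the op named in the trace-check diff.

```python
import torch
import torch.nn as nn

class TinySwinLike(nn.Module):
    # Hypothetical stand-in for SwinModel, ending in an AdaptiveAvgPool1d pooler.
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4, 4)
        self.pooler = nn.AdaptiveAvgPool1d(1)

    def forward(self, x):  # x: (batch, tokens, channels)
        h = self.proj(x)
        return self.pooler(h.transpose(1, 2)).squeeze(-1)

model = TinySwinLike().eval()
x = torch.randn(1, 3, 4)
# check_trace=False skips the re-trace comparison whose mangled module
# names trigger the TracingCheckError; the trace itself is unchanged.
traced = torch.jit.trace(model, x, check_trace=False)
```

In eval mode the traced module and the original produce identical outputs, so skipping the check loses nothing here.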
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21081/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21080
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21080/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21080/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21080/events
|
https://github.com/huggingface/transformers/issues/21080
| 1,528,381,336
|
I_kwDOCUB6oc5bGT-Y
| 21,080
|
Batch Decoding in GPT2 with variable length sequences
|
{
"login": "murthyrudra",
"id": 14203368,
"node_id": "MDQ6VXNlcjE0MjAzMzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/14203368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/murthyrudra",
"html_url": "https://github.com/murthyrudra",
"followers_url": "https://api.github.com/users/murthyrudra/followers",
"following_url": "https://api.github.com/users/murthyrudra/following{/other_user}",
"gists_url": "https://api.github.com/users/murthyrudra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/murthyrudra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/murthyrudra/subscriptions",
"organizations_url": "https://api.github.com/users/murthyrudra/orgs",
"repos_url": "https://api.github.com/users/murthyrudra/repos",
"events_url": "https://api.github.com/users/murthyrudra/events{/privacy}",
"received_events_url": "https://api.github.com/users/murthyrudra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @ArthurZucker ",
"@younesbelkada related issue that we had closed before: https://github.com/huggingface/transformers/issues/18809",
"Before diving a bit deeper, I don't really understand why are you using `convert_id_to_tokens` instead of juste using the `tokenizer.batch_decode` method? Did you try with it? ",
"> Before diving a bit deeper, I don't really understand why are you using `convert_id_to_tokens` instead of juste using the `tokenizer.batch_decode` method? Did you try with it?\r\n\r\nHi @ArthurZucker , the issues is not with `convert_id_to_tokens` . If we replace this function `convert_id_to_tokens` with `tokenizer.batch_decode` we still get the same issue. \r\n\r\nThe issue being `GPT2` model adds position embeddings to every token in the input sequence including `pad_tokens`. \r\n\r\nConsider the input has `I went to the`. If we use batch size of `1` and no padding is specified, the position id for the word `I` will be `0`. However, if I specify the `max_length` as say `5` in the tokenizer. The tokenizer prepends the input with one pad_token. As a result, the position id for the word `I` will be `1`. This changes the model prediction",
"There seems to indeed be a bug! When I use the `generate()` function, I am getting the correct output : \r\n```python \r\n>>> import torch\r\n>>> from transformers import AutoModelForCausalLM, AutoTokenizer\r\n>>> tokenizer = AutoTokenizer.from_pretrained('gpt2')\r\n>>> tokenizer.pad_token = tokenizer.eos_token\r\n>>> tokenizer.pad_token_id = tokenizer.eos_token_id\r\n>>> tokenizer.padding_side = 'left'\r\n\r\n>>> model = AutoModelForCausalLM.from_pretrained('gpt2', pad_token_id = tokenizer.eos_token_id)\r\n\r\n>>> prompt_text = [ 'I went to the','we are trying to','The purpose of this workshop is to check whether we can']\r\n>>> encodings_dict = tokenizer.batch_encode_plus(prompt_text, max_length=12, pad_to_max_length=True, return_tensors= \"pt\")\r\n>>> input_ids = torch.tensor(encodings_dict['input_ids'])\r\n>>> attn_mask = torch.tensor(encodings_dict['attention_mask'])\r\n>>> tokenizer.batch_decode(model.generate(input_ids, attention_mask=attn_mask, max_length=12))\r\n```\r\n```python\r\n['<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>I went to the hospital',\r\n '<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>we are trying to get',\r\n '<|endoftext|>The purpose of this workshop is to check whether we can make']\r\n```\r\nThe issue lies with the fact that we have to pass the positions ids for gpt2. In the generate function, the positional ids are created on the fly if not passed, which is why we have the correct output. \r\n\r\n```python \r\n if attention_mask is not None and position_ids is None:\r\n # create position_ids on the fly for batch generation\r\n position_ids = attention_mask.long().cumsum(-1) - 1\r\n position_ids.masked_fill_(attention_mask == 0, 1)\r\n if past:\r\n position_ids = position_ids[:, -1].unsqueeze(-1)\r\n``` \r\ncc @LysandreJik I am guessing that the original implementation does not use this? 
Or is there a specific reason that we are using \r\n```\r\n if position_ids is None:\r\n position_ids = torch.arange(past_length, input_shape[-1] + past_length, dtype=torch.long, device=device)\r\n position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1])\r\n```\r\nin the model's forward? ",
"Thanks for the great issue @murthyrudra!\r\n\r\nHmmm indeed, might be a bug dating back to the original implementation of `gpt2` within `transformers` (this code dates back to Feb 2019). It's going to be a bit hard to change this within the code, but we can update the documentation/show pointers regarding how to circumvent this issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,682
| 1,682
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-4.18.0-372.26.1.el8_6.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.11
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Hi, I am trying to batch decode using GPT2. Each batch may contain sequences of different lengths. I did try specifying `left` padding and explicitly setting the `pad_token` in GPT2.
Steps to reproduce the error
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
tokenizer = AutoTokenizer.from_pretrained('gpt2')
# run this only for gpt-2 as we do not have a pad token in gpt2
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer.padding_side = 'left'
model = AutoModelForCausalLM.from_pretrained('gpt2', pad_token_id = tokenizer.eos_token_id)
model.to(device)
sentence = "I went to the"
results = tokenizer(
[sentence],
add_special_tokens=True,
truncation=True,
padding=True,
return_tensors='pt',
)
print("========= With No Padding ==========")
print("Tokenizing the input sentence \"{0}\" leads to ".format(sentence) )
print(tokenizer.convert_ids_to_tokens( results['input_ids'][0] ))
with torch.no_grad():
logits = model(results['input_ids'].to(device),
attention_mask=results['attention_mask'].to(device),
).logits[:, -1, :]
index = torch.argmax(logits).item()
print( sentence + " " + tokenizer.convert_ids_to_tokens(index) )
print("\n" * 2)
max_length= 30
print("========= Using Padding of size {0} ==========".format(max_length))
results = tokenizer(
[sentence],
add_special_tokens=True,
max_length=max_length,
truncation=False,
padding='max_length',
return_tensors='pt',
)
print("Tokenizing the padded input sentence \"{0}\" leads to ".format(sentence) )
print(tokenizer.convert_ids_to_tokens( results['input_ids'][0] ))
with torch.no_grad():
logits = model(results['input_ids'].to(device),
attention_mask=results['attention_mask'].to(device),
).logits[:, -1, :]
index = torch.argmax(logits).item()
print( sentence + " " + tokenizer.convert_ids_to_tokens(index) )
print("\n" * 2)
```
Output
```
========= With No Padding ==========
Tokenizing the input sentence "I went to the" leads to
['I', 'Δ went', 'Δ to', 'Δ the']
I went to the Δ hospital
========= Using Padding of size 30 ==========
Tokenizing the padded input sentence "I went to the" leads to
['<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', 'I', 'Δ went', 'Δ to', 'Δ the']
I went to the Δ the
```
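The divergence above comes from how GPT-2 builds position ids when none are passed: its forward uses a plain `arange` over the full (padded) sequence length, so left padding shifts every real token's position. A minimal plain-Python illustration (the variable names here are only for this sketch):

```python
# With position_ids=None, GPT-2's forward builds them as
# torch.arange(past_length, seq_len + past_length) over the whole
# padded sequence, pad tokens included.
seq_len, num_pads = 30, 26  # max_length=30, "I went to the" is 4 tokens

default_ids = list(range(seq_len))       # what the model uses internally
real_token_ids = default_ids[num_pads:]  # positions of "I", "went", "to", "the"

print(real_token_ids)  # [26, 27, 28, 29] instead of the expected [0, 1, 2, 3]
```

Since the position embedding for "the" changes from index 3 to index 29, the logits at the last position change too.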
Explicitly modifying the position ids takes care of the above problem.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
tokenizer = AutoTokenizer.from_pretrained('gpt2')
# run this only for gpt-2 as we do not have a pad token in gpt2
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer.padding_side = 'left'
model = AutoModelForCausalLM.from_pretrained('gpt2', pad_token_id = tokenizer.eos_token_id)
model.to(device)
sentence = "I went to the"
results = tokenizer(
[sentence],
add_special_tokens=True,
truncation=True,
padding=True,
return_tensors='pt',
)
# build position ids that count only real (non-pad) tokens; pad positions stay 0 (batch size 1 assumed)
position_ids = torch.zeros(results['attention_mask'].size(), dtype=torch.int32)
starting_index = 0
for index in range(results['attention_mask'][0].size(0)):
if results['attention_mask'][0][index] == 1:
position_ids[0][index] = starting_index
starting_index += 1
print("========= With No Padding ==========")
print("Tokenizing the input sentence \"{0}\" leads to ".format(sentence) )
print(tokenizer.convert_ids_to_tokens( results['input_ids'][0] ))
with torch.no_grad():
logits = model(results['input_ids'].to(device),
attention_mask=results['attention_mask'].to(device),
position_ids=position_ids.to(device),
).logits[:, -1, :]
index = torch.argmax(logits).item()
print( sentence + " " + tokenizer.convert_ids_to_tokens(index) )
print("\n" * 2)
max_length= 30
print("========= Using Padding of size {0} ==========".format(max_length))
results = tokenizer(
[sentence],
add_special_tokens=True,
max_length=max_length,
truncation=False,
padding='max_length',
return_tensors='pt',
)
print("Tokenizing the padded input sentence \"{0}\" leads to ".format(sentence) )
print(tokenizer.convert_ids_to_tokens( results['input_ids'][0] ))
# build position ids that count only real (non-pad) tokens; pad positions stay 0 (batch size 1 assumed)
position_ids = torch.zeros(results['attention_mask'].size(), dtype=torch.int32)
starting_index = 0
for index in range(results['attention_mask'][0].size(0)):
if results['attention_mask'][0][index] == 1:
position_ids[0][index] = starting_index
starting_index += 1
with torch.no_grad():
logits = model(results['input_ids'].to(device),
attention_mask=results['attention_mask'].to(device),
position_ids=position_ids.to(device),
).logits[:, -1, :]
index = torch.argmax(logits).item()
print( sentence + " " + tokenizer.convert_ids_to_tokens(index) )
print("\n" * 2)
```
The output when position ids are explicitly specified:
```
========= With No Padding ==========
Tokenizing the input sentence "I went to the" leads to
['I', 'Δ went', 'Δ to', 'Δ the']
I went to the Δ hospital
========= Using Padding of size 30 ==========
Tokenizing the padded input sentence "I went to the" leads to
['<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', 'I', 'Δ went', 'Δ to', 'Δ the']
I went to the Δ hospital
```
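For reference, the per-token loop used in the script above can also be written with the cumulative-sum trick that `generate()` applies internally (`attention_mask.long().cumsum(-1) - 1`, with pad positions filled with a dummy value). A torch-free sketch of that logic, for illustration only:

```python
def position_ids_from_mask(attention_mask):
    """Compute position ids from a (possibly left-padded) attention mask.

    Mirrors attention_mask.long().cumsum(-1) - 1 with pad positions
    masked to 1. Plain-Python sketch; in practice use the torch
    one-liner on tensors.
    """
    batch_position_ids = []
    for row in attention_mask:
        running = 0  # cumulative count of real tokens seen so far
        ids = []
        for m in row:
            running += m
            # real token: next position id; pad token: dummy value 1
            ids.append(running - 1 if m == 1 else 1)
        batch_position_ids.append(ids)
    return batch_position_ids

# left padding of 2 on a 4-slot mask
print(position_ids_from_mask([[0, 0, 1, 1]]))  # [[1, 1, 0, 1]]
```

The dummy value assigned to pad positions does not matter (the script above uses 0, `generate()` uses 1), because those positions are zeroed out by the attention mask anyway.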
Is it possible to have documentation mentioning this?
### Expected behavior
In both scenarios, with and without left padding, the model should predict `Δ hospital` as the token with the highest probability. However, when the input is padded and the position ids are not modified, we get `Δ the` as the next token with the highest probability instead.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21080/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21079
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21079/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21079/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21079/events
|
https://github.com/huggingface/transformers/issues/21079
| 1,528,052,444
|
I_kwDOCUB6oc5bFDrc
| 21,079
|
TokenGT
|
{
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "Raman-Kumar",
"id": 32980600,
"node_id": "MDQ6VXNlcjMyOTgwNjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/32980600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Raman-Kumar",
"html_url": "https://github.com/Raman-Kumar",
"followers_url": "https://api.github.com/users/Raman-Kumar/followers",
"following_url": "https://api.github.com/users/Raman-Kumar/following{/other_user}",
"gists_url": "https://api.github.com/users/Raman-Kumar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Raman-Kumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Raman-Kumar/subscriptions",
"organizations_url": "https://api.github.com/users/Raman-Kumar/orgs",
"repos_url": "https://api.github.com/users/Raman-Kumar/repos",
"events_url": "https://api.github.com/users/Raman-Kumar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Raman-Kumar/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Raman-Kumar",
"id": 32980600,
"node_id": "MDQ6VXNlcjMyOTgwNjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/32980600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Raman-Kumar",
"html_url": "https://github.com/Raman-Kumar",
"followers_url": "https://api.github.com/users/Raman-Kumar/followers",
"following_url": "https://api.github.com/users/Raman-Kumar/following{/other_user}",
"gists_url": "https://api.github.com/users/Raman-Kumar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Raman-Kumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Raman-Kumar/subscriptions",
"organizations_url": "https://api.github.com/users/Raman-Kumar/orgs",
"repos_url": "https://api.github.com/users/Raman-Kumar/repos",
"events_url": "https://api.github.com/users/Raman-Kumar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Raman-Kumar/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@clefourrier for sure will work",
"Thanks for assigning.\r\n@clefourrier I am still examining and experimenting more...",
"Ping me if you need help! :smile: ",
"Giving up figuring it out myself \r\nmy level - I was not familiar with the transformer architecture, collators etc, and other models like bert \r\nnow I have studied them, and the TokenGT model's theoretical aspects.\r\n\r\n\r\nI have downloaded the checkpoint folder from the [drive link](https://drive.google.com/drive/folders/1mo0dV-aLxGFWbPF8xfE8phWTmOtIV1HG?usp=sharing) in the [original repo link](https://github.com/jw9730/tokengt) \r\n\r\nNow I have to run **both** the PR with the checkpoint and the original repo \r\n\r\n\r\nCan you share the script you used with Graphormer?\r\n@clefourrier ",
"Ok so you will need to do something similar to this:\r\n\r\n```python\r\nimport argparse\r\nimport os, sys\r\nfrom pathlib import Path\r\n\r\nimport torch\r\nfrom torch import nn\r\nfrom torch.hub import load_state_dict_from_url\r\n\r\n# Here, you need to import the transformers version of the TokenGT code (from the PR) \r\nfrom transformers import (\r\n AutoModel,\r\n GraphormerConfig,\r\n GraphormerForGraphClassification,\r\n GraphormerModel,\r\n # GraphormerCollator\r\n)\r\nfrom transformers.utils import logging\r\nfrom transformers.models.graphormer.collating_graphormer import preprocess_item, GraphormerDataCollator \r\n\r\n# Here, you need to import the original TokenGT code instead of Graphormer\r\nsys.path.append(\"path to Graphormer/\")\r\nimport graphormer\r\nimport graphormer.tasks.graph_prediction\r\nimport graphormer.models.graphormer\r\nfrom graphormer.evaluate.evaluate import convert_namespace_to_omegaconf, tasks, options\r\nfrom fairseq import utils\r\nfrom fairseq.logging import progress_bar\r\n\r\n# You will likely have to change some of these depending on the error messages you get when loading the checkpoint to transformers format\r\nrename_keys = [\r\n (\"encoder.lm_output_learned_bias\", \"classifier.lm_output_learned_bias\"),\r\n (\"encoder.embed_out.weight\", \"classifier.classifier.weight\"),\r\n #(\"encoder.embed_out.weight\", \"classifier.embed_out.weight\"),\r\n #(\"encoder.embed_out.bias\", \"classifier.embed_out.bias\"),\r\n]\r\n\r\ndef remove_ignore_keys_(state_dict):\r\n ignore_keys = [\r\n \"encoder.version\",\r\n \"decoder.version\",\r\n \"encoder.masked_lm_pooler.bias\", # to check\r\n \"encoder.masked_lm_pooler.weight\", # to check\r\n \"_float_tensor\",\r\n ]\r\n for k in ignore_keys:\r\n state_dict.pop(k, None)\r\n\r\n\r\ndef rename_key(dct, old, new):\r\n val = dct.pop(old)\r\n dct[new] = val\r\n\r\n\r\ndef make_linear_from_emb(emb):\r\n vocab_size, emb_size = emb.weight.shape\r\n lin_layer = nn.Linear(vocab_size, emb_size, 
bias=False)\r\n lin_layer.weight.data = emb.weight.data\r\n return lin_layer\r\n\r\n\r\n# In this section, you need to replace calls to Graphormer by calls to TokenGT models. \r\n# Graphormer model gets replaced by the original TokenGT model\r\n# Transformers model gets replaced by the format in Transformers \r\n@torch.no_grad()\r\ndef convert_graphormer_checkpoint(\r\n args, checkpoint_name, pytorch_dump_folder_path\r\n):\r\n pytorch_dump_folder_path = f\"{pytorch_dump_folder_path}/{checkpoint_name}\" \r\n cfg = convert_namespace_to_omegaconf(args)\r\n task = tasks.setup_task(cfg.task)\r\n\r\n # Graphormer model\r\n graphormer_model = task.build_model(cfg.model)\r\n graphormer_state = torch.load(checkpoint_name)[\"model\"]\r\n graphormer_model.load_state_dict(graphormer_state, strict=True, model_cfg=cfg.model)\r\n graphormer_model.upgrade_state_dict(graphormer_model.state_dict())\r\n\r\n\r\n # Transformers model\r\n config = GraphormerConfig(\r\n num_labels=1,\r\n share_input_output_embed=False,\r\n num_layers=12,\r\n embedding_dim=768,\r\n ffn_embedding_dim=768,\r\n num_attention_heads=32,\r\n dropout=0.0,\r\n attention_dropout=0.1,\r\n activation_dropout=0.1,\r\n encoder_normalize_before=True,\r\n pre_layernorm=False,\r\n apply_graphormer_init=True,\r\n activation_fn=\"gelu\",\r\n no_token_positional_embeddings=False,\r\n )\r\n transformers_model = GraphormerForGraphClassification(config)\r\n\r\n # We copy the state dictionary from the original model to our format \r\n state_dict = graphormer_model.state_dict()\r\n remove_ignore_keys_(state_dict)\r\n for src, dest in rename_keys:\r\n rename_key(state_dict, src, dest)\r\n transformers_model.load_state_dict(state_dict)\r\n\r\n # Check results\r\n graphormer_model.eval()\r\n transformers_model.eval()\r\n\r\n split = args.split\r\n task.load_dataset(split)\r\n batch_iterator = task.get_batch_iterator(\r\n dataset=task.dataset(split),\r\n max_tokens=cfg.dataset.max_tokens_valid,\r\n max_sentences=2, 
#cfg.dataset.batch_size_valid,\r\n max_positions=utils.resolve_max_positions(\r\n task.max_positions(),\r\n graphormer_model.max_positions(),\r\n ),\r\n ignore_invalid_inputs=cfg.dataset.skip_invalid_size_inputs_valid_test,\r\n required_batch_size_multiple=cfg.dataset.required_batch_size_multiple,\r\n seed=cfg.common.seed,\r\n num_workers=cfg.dataset.num_workers,\r\n epoch=0,\r\n data_buffer_size=cfg.dataset.data_buffer_size,\r\n disable_iterator_cache=False,\r\n )\r\n itr = batch_iterator.next_epoch_itr(\r\n shuffle=False, set_dataset_epoch=False\r\n )\r\n progress = progress_bar.progress_bar(\r\n itr,\r\n log_format=cfg.common.log_format,\r\n log_interval=cfg.common.log_interval,\r\n default_log_format=(\"tqdm\" if not cfg.common.no_progress_bar else \"simple\")\r\n )\r\n\r\n # Inference\r\n collator = GraphormerDataCollator() #on_the_fly_processing=True)\r\n ys_graphormer = []\r\n ys_transformers = []\r\n with torch.no_grad():\r\n for i, sample in enumerate(progress):\r\n y_graphormer = graphormer_model(**sample[\"net_input\"])[:, 0, :].reshape(-1)\r\n ys_graphormer.extend(y_graphormer.detach())\r\n #print(sample[\"net_input\"][\"batched_data\"])\r\n transformer_sample = sample[\"net_input\"][\"batched_data\"] # data is already collated - collator(sample[\"net_input\"][\"batched_data\"])\r\n transformer_sample.pop(\"idx\")\r\n transformer_sample[\"labels\"] = transformer_sample.pop(\"y\")\r\n transformer_sample[\"node_input\"] = transformer_sample.pop(\"x\")\r\n torch.set_printoptions(profile=\"full\")\r\n y_transformer = transformers_model(**transformer_sample)[\"logits\"] #[:, 0, :].reshape(-1)\r\n ys_transformers.extend(y_transformer.detach())\r\n\r\n ys_graphormer = torch.stack(ys_graphormer)\r\n ys_transformers = torch.stack(ys_transformers).squeeze(-1)\r\n\r\n assert ys_graphormer.shape == ys_transformers.shape\r\n assert (ys_graphormer == ys_transformers).all().item()\r\n\r\n print(\"All good :)\")\r\n\r\n 
Path(pytorch_dump_folder_path).mkdir(exist_ok=True)\r\n transformers_model.save_pretrained(pytorch_dump_folder_path)\r\n transformers_model.push_to_hub(checkpoint_name, use_auth_token=\"replace by your token\")\r\n\r\n\r\nif __name__ == \"__main__\":\r\n parser = options.get_training_parser()\r\n # Required parameters\r\n parser.add_argument(\r\n \"--checkpoint_name\",\r\n type=str,\r\n help=\"name of a model to load\", # path to a model.pt on local filesystem.\"\r\n )\r\n parser.add_argument(\r\n \"--pytorch_dump_folder_path\",\r\n default=None,\r\n type=str,\r\n help=\"Path to the output PyTorch model.\",\r\n )\r\n\r\n parser.add_argument(\r\n \"--split\",\r\n type=str,\r\n )\r\n parser.add_argument(\r\n \"--metric\",\r\n type=str,\r\n )\r\n\r\n\r\n args = options.parse_args_and_arch(parser, modify_parser=None)\r\n print(args)\r\n\r\n #args = parser.parse_args()\r\n convert_graphormer_checkpoint(\r\n args,\r\n args.checkpoint_name,\r\n args.pytorch_dump_folder_path,\r\n )\r\n```",
"New to deep learning.\r\nI am using a MacBook Air M1. \r\nWhile running the command `pip install -e \".[dev]\"` for the transformers repo, \r\nit shows some errors for tensorflow, \r\nso I am using `pip install -e \".[dev-torch]\"`, which works fine.\r\n\r\nWhat argument list do you supply when running the above script for Graphormer? @clefourrier ",
"Hi @Raman-Kumar!\r\nI don't think the tensorflow error is very important atm, don't worry :smile: \r\n\r\nHere is my argument list: `--checkpoint_name Name_of_the_checkpoint_you_downloaded_for_tokenGT --pytorch_dump_folder_path tmp --user-dir \"Directory where you cloned the code from the TokenGT repository\" --num-workers 16 --ddp-backend=legacy_ddp --dataset-name MUTAG_0 --user-data-dir \"custom_datasets\" --task graph_prediction --criterion l1_loss --arch graphormer_base --num-classes 1 --batch-size 64 --pretrained-model-name pcqm4mv1_graphormer_base --load-pretrained-model-output-layer --split valid --seed 1`\r\n \r\nFrom `ddp-backend` on, you will need to adapt the parameters to launch one of the available datasets in TokenGT, or you could add a `custom_datasets` loader in `tokengt/data/predict_custom`. \r\n\r\nFor the latter, I think there is a sample script, but if not you can take inspiration from this, which loads MUTAG from the hub to load it in TokenGT:\r\n\r\n ```python\r\n from datasets import load_dataset\r\n\r\nfrom tokengt.data import register_dataset\r\nfrom tokengt.data.pyg_datasets.pyg_dataset import TokenGTPYGDataset\r\n\r\nimport torch\r\nfrom torch_geometric.data import Data, Dataset, InMemoryDataset\r\n\r\nimport numpy as np\r\n\r\n\r\nclass TmpDataset(InMemoryDataset):\r\n def __init__(self, root, data_list):\r\n self.data_list = data_list\r\n super().__init__(root, None, None, None)\r\n\r\n @property\r\n def raw_file_names(self):\r\n return []\r\n\r\n @property\r\n def processed_file_names(self):\r\n return [\"data.pt\"]\r\n\r\n def len(self):\r\n return len(self.data_list)\r\n\r\n def get(self, idx):\r\n data = self.data_list[idx]\r\n return data\r\n\r\ndef create_customized_dataset(dataset_name, ix_xval):\r\n graphs_dataset = load_dataset(f\"graphs-datasets/{dataset_name}\")\r\n graphs_dataset = graphs_dataset.shuffle(0)\r\n\r\n key = \"full\" if \"full\" in graphs_dataset.keys() else \"train\"\r\n\r\n graphs_list = [\r\n Data(\r\n 
**{\r\n \"edge_index\": torch.tensor(graph[\"edge_index\"], dtype=torch.long),\r\n \"y\": torch.tensor(graph[\"y\"], dtype=torch.long),\r\n \"num_nodes\": graph[\"num_nodes\"],\r\n #\"x\": torch.ones(graph[\"num_nodes\"], 1, dtype=torch.long), # same embedding for all\r\n #\"edge_attr\": torch.ones(len(graph[\"edge_index\"][0]), 1, dtype=torch.long), # same embedding for all\r\n \"x\": torch.tensor(graph[\"node_feat\"], dtype=torch.long) if \"node_feat\" in graph.keys() else torch.ones(graph[\"num_nodes\"], 1, dtype=torch.long), # same embedding for all\r\n \"edge_attr\": torch.tensor(graph[\"edge_attr\"], dtype=torch.long) if \"edge_attr\" in graph.keys() else torch.ones(len(graph[\"edge_index\"][0]), 1, dtype=torch.long), # same embedding for all\r\n }\r\n )\r\n for graph in graphs_dataset[key]\r\n ]\r\n\r\n len_dataset = len(graphs_dataset[key])\r\n len_xval_batch = int(len_dataset / 10)\r\n cur_val_range_int = list(range(ix_xval * len_xval_batch, (ix_xval + 1) * len_xval_batch))\r\n cur_val_range = np.array(cur_val_range_int, dtype=np.int64)\r\n cur_train_range = np.array(\r\n [ix for ix in range(len_dataset) if ix not in cur_val_range_int], dtype=np.int64\r\n )\r\n\r\n dataset = TmpDataset(\"\", graphs_list)\r\n\r\n return {\r\n \"dataset\": TokenGTPYGDataset(\r\n dataset=dataset,\r\n seed=0,\r\n train_idx=torch.tensor([0]), #cur_train_range),\r\n valid_idx=torch.tensor(cur_val_range),\r\n test_idx=torch.tensor(cur_val_range),\r\n ), \r\n \"source\": \"pyg\",\r\n \"train_idx\":torch.tensor(cur_train_range),\r\n \"valid_idx\":torch.tensor(cur_val_range),\r\n \"test_idx\":torch.tensor(cur_val_range),\r\n }\r\n\r\n\r\n @register_dataset(\"MUTAG_0\")\r\n def m0():\r\n return create_customized_dataset(\"MUTAG\", 0)\r\n ``` \r\n\r\nTell me if anything is unclear! :hugs: ",
"Right now I am running this script \r\n\r\nscript.py\r\n```\r\nimport argparse\r\nimport os, sys\r\nfrom pathlib import Path\r\n\r\nimport torch\r\nfrom torch import nn\r\nfrom torch.hub import load_state_dict_from_url\r\nfrom transformers.utils import logging\r\n\r\nimport tokengt\r\nimport tokengt.tasks.graph_prediction \r\nimport tokengt.models.tokengt\r\nfrom tokengt.evaluate.evaluate import convert_namespace_to_omegaconf, tasks, options\r\n\r\nfrom fairseq import utils\r\nfrom fairseq.logging import progress_bar\r\n\r\n@torch.no_grad()\r\ndef convert_tokengt_checkpoint(\r\n args, checkpoint_name, pytorch_dump_folder_path\r\n ):\r\n pytorch_dump_folder_path = f\"{pytorch_dump_folder_path}/{checkpoint_name}\" \r\n cfg = convert_namespace_to_omegaconf(args)\r\n # task = tasks.setup_task(cfg.task)\r\n\r\nif __name__ == \"__main__\":\r\n parser = options.get_training_parser()\r\n # Required parameters\r\n parser.add_argument(\r\n \"--checkpoint_name\",\r\n type=str,\r\n help=\"name of a model to load\", # path to a model.pt on local filesystem.\"\r\n )\r\n parser.add_argument(\r\n \"--pytorch_dump_folder_path\",\r\n default=None,\r\n type=str,\r\n help=\"Path to the output PyTorch model.\",\r\n )\r\n\r\n parser.add_argument(\r\n \"--split\",\r\n type=str,\r\n )\r\n parser.add_argument(\r\n \"--metric\",\r\n type=str,\r\n )\r\n\r\n\r\n args = options.parse_args_and_arch(parser, modify_parser=None)\r\n print(args.pytorch_dump_folder_path)\r\n\r\n args = parser.parse_args()\r\n convert_tokengt_checkpoint(\r\n args,\r\n args.checkpoint_name,\r\n args.pytorch_dump_folder_path,\r\n )\r\n```\r\nwith command \r\n` .....script.py --checkpoint_name pcqv2-tokengt-orf64-trained --pytorch_dump_folder_path tmp --user-dir \"../tokengt\" --num-workers 16 --ddp-backend=legacy_ddp --dataset-name PCQM4Mv2 --user-data-dir \"tokengt/data/ogb_datasets\" --task graph_prediction --criterion l1_loss --arch tokengt_base --num-classes 1 --batch-size 64 --pretrained-model-name mytokengt 
--load-pretrained-model-output-layer --split valid --seed 1`\r\n\r\nin `cfg = convert_namespace_to_omegaconf(args)`\r\n\r\nI am getting this error\r\n```\r\n2023-02-09 13:05:21 | ERROR | fairseq.dataclass.utils | Error when composing. Overrides: ['common.no_progress_bar=False', 'common.log_interval=100', 'common.log_format=null', 'common.log_file=null', 'common.aim_repo=null', 'common.aim_run_hash=null', 'common.tensorboard_logdir=null', 'common.wandb_project=null', 'common.azureml_logging=False', 'common.seed=1', 'common.cpu=False', 'common.tpu=False', 'common.bf16=False', 'common.memory_efficient_bf16=False', 'common.fp16=False', 'common.memory_efficient_fp16=False', 'common.fp16_no_flatten_grads=False', 'common.fp16_init_scale=128', 'common.fp16_scale_window=null', 'common.fp16_scale_tolerance=0.0', 'common.on_cpu_convert_precision=False', 'common.min_loss_scale=0.0001', 'common.threshold_loss_scale=null', 'common.amp=False', 'common.amp_batch_retries=2', 'common.amp_init_scale=128', 'common.amp_scale_window=null', \"common.user_dir='../tokengt'\", 'common.empty_cache_freq=0', 'common.all_gather_list_size=16384', 'common.model_parallel_size=1', 'common.quantization_config_path=null', 'common.profile=False', 'common.reset_logging=False', 'common.suppress_crashes=False', 'common.use_plasma_view=False', \"common.plasma_path='/tmp/plasma'\", 'common_eval.path=null', 'common_eval.post_process=null', 'common_eval.quiet=False', \"common_eval.model_overrides='{}'\", 'common_eval.results_path=null', 'distributed_training.distributed_world_size=1', 'distributed_training.distributed_num_procs=1', 'distributed_training.distributed_rank=0', \"distributed_training.distributed_backend='nccl'\", 'distributed_training.distributed_init_method=null', 'distributed_training.distributed_port=-1', 'distributed_training.device_id=0', 'distributed_training.distributed_no_spawn=False', \"distributed_training.ddp_backend='legacy_ddp'\", \"distributed_training.ddp_comm_hook='none'\", 
'distributed_training.bucket_cap_mb=25', 'distributed_training.fix_batches_to_gpus=False', 'distributed_training.find_unused_parameters=False', 'distributed_training.gradient_as_bucket_view=False', 'distributed_training.fast_stat_sync=False', 'distributed_training.heartbeat_timeout=-1', 'distributed_training.broadcast_buffers=False', 'distributed_training.slowmo_momentum=null', \"distributed_training.slowmo_base_algorithm='localsgd'\", 'distributed_training.localsgd_frequency=3', 'distributed_training.nprocs_per_node=1', 'distributed_training.pipeline_model_parallel=False', 'distributed_training.pipeline_balance=null', 'distributed_training.pipeline_devices=null', 'distributed_training.pipeline_chunks=0', 'distributed_training.pipeline_encoder_balance=null', 'distributed_training.pipeline_encoder_devices=null', 'distributed_training.pipeline_decoder_balance=null', 'distributed_training.pipeline_decoder_devices=null', \"distributed_training.pipeline_checkpoint='never'\", \"distributed_training.zero_sharding='none'\", 'distributed_training.fp16=False', 'distributed_training.memory_efficient_fp16=False', 'distributed_training.tpu=False', 'distributed_training.no_reshard_after_forward=False', 'distributed_training.fp32_reduce_scatter=False', 'distributed_training.cpu_offload=False', 'distributed_training.use_sharded_state=False', 'distributed_training.not_fsdp_flatten_parameters=False', 'dataset.num_workers=16', 'dataset.skip_invalid_size_inputs_valid_test=False', 'dataset.max_tokens=null', 'dataset.batch_size=64', 'dataset.required_batch_size_multiple=8', 'dataset.required_seq_len_multiple=1', 'dataset.dataset_impl=null', 'dataset.data_buffer_size=10', \"dataset.train_subset='train'\", \"dataset.valid_subset='valid'\", 'dataset.combine_valid_subsets=null', 'dataset.ignore_unused_valid_subsets=False', 'dataset.validate_interval=1', 'dataset.validate_interval_updates=0', 'dataset.validate_after_updates=0', 'dataset.fixed_validation_seed=null', 
'dataset.disable_validation=False', 'dataset.max_tokens_valid=null', 'dataset.batch_size_valid=null', 'dataset.max_valid_steps=null', 'dataset.curriculum=0', \"dataset.gen_subset='test'\", 'dataset.num_shards=1', 'dataset.shard_id=0', 'dataset.grouped_shuffling=False', 'dataset.update_epoch_batch_itr=null', 'dataset.update_ordered_indices_seed=False', 'optimization.max_epoch=0', 'optimization.max_update=0', 'optimization.stop_time_hours=0.0', 'optimization.clip_norm=0.0', 'optimization.sentence_avg=False', 'optimization.update_freq=[1]', 'optimization.lr=[0.25]', 'optimization.stop_min_lr=-1.0', 'optimization.use_bmuf=False', 'optimization.skip_remainder_batch=False', \"checkpoint.save_dir='checkpoints'\", \"checkpoint.restore_file='checkpoint_last.pt'\", 'checkpoint.continue_once=null', 'checkpoint.finetune_from_model=null', 'checkpoint.reset_dataloader=False', 'checkpoint.reset_lr_scheduler=False', 'checkpoint.reset_meters=False', 'checkpoint.reset_optimizer=False', \"checkpoint.optimizer_overrides='{}'\", 'checkpoint.save_interval=1', 'checkpoint.save_interval_updates=0', 'checkpoint.keep_interval_updates=-1', 'checkpoint.keep_interval_updates_pattern=-1', 'checkpoint.keep_last_epochs=-1', 'checkpoint.keep_best_checkpoints=-1', 'checkpoint.no_save=False', 'checkpoint.no_epoch_checkpoints=False', 'checkpoint.no_last_checkpoints=False', 'checkpoint.no_save_optimizer_state=False', \"checkpoint.best_checkpoint_metric='loss'\", 'checkpoint.maximize_best_checkpoint_metric=False', 'checkpoint.patience=-1', \"checkpoint.checkpoint_suffix=''\", 'checkpoint.checkpoint_shard_count=1', 'checkpoint.load_checkpoint_on_all_dp_ranks=False', 'checkpoint.write_checkpoints_asynchronously=False', 'checkpoint.model_parallel_size=1', 'bmuf.block_lr=1.0', 'bmuf.block_momentum=0.875', 'bmuf.global_sync_iter=50', 'bmuf.warmup_iterations=500', 'bmuf.use_nbm=False', 'bmuf.average_sync=False', 'bmuf.distributed_world_size=1', 'generation.beam=5', 'generation.nbest=1', 
'generation.max_len_a=0.0', 'generation.max_len_b=200', 'generation.min_len=1', 'generation.match_source_len=False', 'generation.unnormalized=False', 'generation.no_early_stop=False', 'generation.no_beamable_mm=False', 'generation.lenpen=1.0', 'generation.unkpen=0.0', 'generation.replace_unk=null', 'generation.sacrebleu=False', 'generation.score_reference=False', 'generation.prefix_size=0', 'generation.no_repeat_ngram_size=0', 'generation.sampling=False', 'generation.sampling_topk=-1', 'generation.sampling_topp=-1.0', 'generation.constraints=null', 'generation.temperature=1.0', 'generation.diverse_beam_groups=-1', 'generation.diverse_beam_strength=0.5', 'generation.diversity_rate=-1.0', 'generation.print_alignment=null', 'generation.print_step=False', 'generation.lm_path=null', 'generation.lm_weight=0.0', 'generation.iter_decode_eos_penalty=0.0', 'generation.iter_decode_max_iter=10', 'generation.iter_decode_force_max_iter=False', 'generation.iter_decode_with_beam=1', 'generation.iter_decode_with_external_reranker=False', 'generation.retain_iter_history=False', 'generation.retain_dropout=False', 'generation.retain_dropout_modules=null', 'generation.decoding_format=null', 'generation.no_seed_provided=False', 'generation.eos_token=null', 'eval_lm.output_word_probs=False', 'eval_lm.output_word_stats=False', 'eval_lm.context_window=0', 'eval_lm.softmax_batch=9223372036854775807', 'interactive.buffer_size=0', \"interactive.input='-'\", 'ema.store_ema=False', 'ema.ema_decay=0.9999', 'ema.ema_start_update=0', 'ema.ema_seed_model=null', 'ema.ema_update_freq=1', 'ema.ema_fp32=False', 'task=graph_prediction', 'task._name=graph_prediction', \"task.dataset_name='PCQM4Mv2'\", 'task.num_classes=1', 'task.max_nodes=128', \"task.dataset_source='pyg'\", 'task.num_atoms=4608', 'task.num_edges=1536', 'task.num_in_degree=512', 'task.num_out_degree=512', 'task.num_spatial=512', 'task.num_edge_dis=128', 'task.multi_hop_max_dist=5', 'task.spatial_pos_max=1024', 
\"task.edge_type='multi_hop'\", 'task.seed=1', \"task.pretrained_model_name='mytokengt'\", 'task.load_pretrained_model_output_layer=True', 'task.train_epoch_shuffle=True', \"task.user_data_dir='tokengt/data/ogb_datasets'\", 'criterion=l1_loss', 'criterion._name=l1_loss', 'lr_scheduler=fixed', 'lr_scheduler._name=fixed', 'lr_scheduler.force_anneal=null', 'lr_scheduler.lr_shrink=0.1', 'lr_scheduler.warmup_updates=0', 'lr_scheduler.lr=[0.25]', 'scoring=bleu', 'scoring._name=bleu', 'scoring.pad=1', 'scoring.eos=2', 'scoring.unk=3']\r\nTraceback (most recent call last):\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/hydra/_internal/config_loader_impl.py\", line 513, in _apply_overrides_to_config\r\n OmegaConf.update(cfg, key, value, merge=True)\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/omegaconf.py\", line 613, in update\r\n root.__setattr__(last_key, value)\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/dictconfig.py\", line 285, in __setattr__\r\n raise e\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/dictconfig.py\", line 282, in __setattr__\r\n self.__set_impl(key, value)\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/dictconfig.py\", line 266, in __set_impl\r\n self._set_item_impl(key, value)\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/basecontainer.py\", line 398, in _set_item_impl\r\n self._validate_set(key, value)\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/dictconfig.py\", line 143, in _validate_set\r\n self._validate_set_merge_impl(key, value, is_assign=True)\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/dictconfig.py\", line 156, in _validate_set_merge_impl\r\n 
self._format_and_raise(\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/base.py\", line 95, in _format_and_raise\r\n format_and_raise(\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/_utils.py\", line 694, in format_and_raise\r\n _raise(ex, cause)\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/_utils.py\", line 610, in _raise\r\n raise ex # set end OC_CAUSE=1 for full backtrace\r\nomegaconf.errors.ValidationError: child 'dataset.update_epoch_batch_itr' is not Optional\r\n full_key: dataset.update_epoch_batch_itr\r\n reference_type=DatasetConfig\r\n object_type=DatasetConfig\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/ramankumar/OpenSource/script.py\", line 106, in <module>\r\n convert_graphormer_checkpoint(\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/Users/ramankumar/OpenSource/script.py\", line 74, in convert_graphormer_checkpoint\r\n cfg = convert_namespace_to_omegaconf(args)\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/fairseq/dataclass/utils.py\", line 399, in convert_namespace_to_omegaconf\r\n composed_cfg = compose(\"config\", overrides=overrides, strict=False)\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/hydra/experimental/compose.py\", line 31, in compose\r\n cfg = gh.hydra.compose_config(\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/hydra/_internal/hydra.py\", line 507, in compose_config\r\n cfg = self.config_loader.load_configuration(\r\n File 
\"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/hydra/_internal/config_loader_impl.py\", line 151, in load_configuration\r\n return self._load_configuration(\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/hydra/_internal/config_loader_impl.py\", line 277, in _load_configuration\r\n ConfigLoaderImpl._apply_overrides_to_config(config_overrides, cfg)\r\n File \"/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/hydra/_internal/config_loader_impl.py\", line 520, in _apply_overrides_to_config\r\n raise ConfigCompositionException(\r\nhydra.errors.ConfigCompositionException: Error merging override dataset.update_epoch_batch_itr=null\r\n\r\n```\r\n\r\nchild 'dataset.update_epoch_batch_itr' is not Optional ??\r\n@clefourrier ",
"I think you read the error correctly, apparently for TokenGT+fairseq it does not seem to be. \r\n\r\nYou could try passing it as `False` (I think it's a boolean), or looking for it either in the loading scripts or config files to see how it is managed for the project.",
"Once again explain how to supply datasets in an argument \r\n\r\nI created a file `predict_custom.py` alongside (in same folder) conversion `script.py` and pasted all code you gave \r\n\r\n```\r\nfrom datasets import load_dataset\r\n\r\n....\r\nclass TmpDataset(InMemoryDataset):\r\n....\r\n\r\ndef create_customized_dataset(dataset_name, ix_xval):\r\n....\r\n @register_dataset(\"MUTAG_0\")\r\n def m0():\r\n return create_customized_dataset(\"MUTAG\", 0)\r\n```\r\n\r\n--dataset-name --MUTAG_0 --user-data-dir \"/tokengt/data/ogb_datasets\"\r\nHow I should write here? @clefourrier ",
"The simplest would be to do what you did initially, and use one of the native datasets for TokenGT with `--dataset-name PCQM4Mv2`. \r\nIf you want to use custom datasets, your `--user-data-dir` must point to the folder containing your dataset script if I remember well.",
"π Got familiar with PyTorch Geometric and the Graph Neural Network project.\r\nI read about the parameters and datasets for graphs from [Graphormer](https://github.com/microsoft/Graphormer)/[docs](https://github.com/microsoft/Graphormer/tree/main/docs). \r\n\r\nHere at [tokengt](https://github.com/jw9730/tokengt)/[large-scale-regression](https://github.com/jw9730/tokengt/tree/main/large-scale-regression)/[scripts](https://github.com/jw9730/tokengt/tree/main/large-scale-regression/scripts) there was a training script for TokenGT using `fairseq-train` with an argument list.\r\n\r\nInitially, I assumed that the argument list was only used with `fairseq-train`, but (no!) the same applies to the conversion script as well (I had not tried this. π so sad!!)\r\n\r\nNow everything works fine. yay π\r\n\r\n",
"Congratulations, that's very cool! :hugs: \r\n\r\nDo you know what your next steps are?",
"Next \r\nI added some import-related code in transformers folder like `src/transformers/__init__.py `and other files (taking the help of Graphormer PR )\r\n\r\nafter that I was successfully able to import HFπ€tokegt in my conversion script.py\r\n```\r\nfrom transformers import (\r\n AutoModel,\r\n TokenGTConfig,\r\n TokenGTForGraphClassification,\r\n)\r\n```\r\n\r\n```\r\n tokengt_model = task.build_model(cfg.model)\r\n tokengt_state = torch.load(checkpoint_name)[\"model\"]\r\n tokengt_model.load_state_dict(tokengt_state, strict=True, model_cfg=cfg.model)\r\n tokengt_model.upgrade_state_dict(tokengt_model.state_dict())\r\n # upto these lines works fine no error \r\n\r\n\r\n# Transformers model\r\n config = TokenGTConfig(\r\n tasks_weights=None, # added this \r\n num_labels=1,\r\n share_input_output_embed=False,\r\n num_layers=12,\r\n embedding_dim=768,\r\n ffn_embedding_dim=768,\r\n num_attention_heads=32,\r\n dropout=0.0,\r\n attention_dropout=0.1,\r\n activation_dropout=0.1,\r\n encoder_normalize_before=True,\r\n pre_layernorm=False,\r\n apply_graphormer_init=True,\r\n activation_fn=\"gelu\",\r\n no_token_positional_embeddings=False,\r\n )\r\n transformers_model = TokenGTForGraphClassification(config)\r\n state_dict = tokengt_model.state_dict()\r\n\r\n transformers_model.load_state_dict(state_dict) # here shows me following error\r\n```\r\n\r\n```\r\n raise RuntimeError('Error(s) in loading state_dict for {}:\\n\\t{}'.format(\r\nRuntimeError: Error(s) in loading state_dict for TokenGTForGraphClassification:\r\n Missing key(s) in state_dict: \"decoder.lm_output_learned_bias\", \"decoder.embed_out.weight\". \r\n Unexpected key(s) in state_dict: \"encoder.lm_output_learned_bias\", \"encoder.embed_out.weight\", \"encoder.graph_encoder.final_layer_norm.weight\", \"encoder.graph_encoder.final_layer_norm.bias\", \"encoder.graph_encoder.graph_feature.orf_encoder.weight\", \"encoder.graph_encoder.graph_feature.order_encoder.weight\". 
\r\n size mismatch for encoder.graph_encoder.graph_feature.edge_encoder.weight: copying a param with shape torch.Size([1536, 768]) from checkpoint, the shape in current model is torch.Size([2048, 768]).\r\n```\r\n\r\nthere are two checkpoints lap16, orf64.\r\nBoth gives same error \r\nexcept \r\n\"encoder.graph_encoder.graph_feature.**lap**_encoder.weight\"\r\n\"encoder.graph_encoder.graph_feature.**orf**_encoder.weight\"\r\n\r\nthese are error \r\nMissing key(s), Unexpected key(s), size mismatch \r\n\r\nneed help @clefourrier \r\n\r\nedit : adding num_edges=1536 in config removed size mismatch error",
"I think this should be managed with the `remove_ignore_keys_` and `rename_keys` parts: you need to find what the \"unexpected keys\" in the original checkpoint map to in the new format, and rename them accordingly. In essence, you are going from one format (tokenGT format) to another format (transformers style) for your checkpoint, so you need to do this mapping.\r\n\r\nCongrats on debugging the other error! :clap: ",
"Initially, I had no idea how to map them and to what. I don't even know what they mean. So, I spent some time studying transformers and looking at code.\r\n\r\nsuddenly I thought let's print models\r\nSo, I printed both original models and HFπ€ model \r\n```\r\n print(transformers_model)\r\n print(tokengt_model)\r\n```\r\nand compared the differences.\r\nAccordingly, I added these arguments to the config \r\n```\r\n# config for lap16\r\nconfig = TokenGTConfig(\r\n ...\r\n lap_node_id=True,\r\n lap_node_id_k=16,\r\n id2label = {\"1\":\"className\"}, # I added a dictionary explained below why I did this \r\n type_id=True,\r\n prenorm=True,\r\n ...\r\n)\r\n```\r\nand renamed keys \r\n```\r\nrename_keys = [\r\n (\"encoder.embed_out.weight\", \"decoder.embed_out.weight\"),\r\n\r\n\r\n # I did not find lm_output_learned_bias in models So, I checked code and doing this made most sense \r\n (\"encoder.lm_output_learned_bias\", \"decoder.lm_output_learned_bias\"), \r\n]\r\n```\r\n\r\nDoing this works fine. no error.\r\n\r\n\r\nif I don't do this `id2label = {\"1\":\"className\"}`\r\nputting argument `num_labels = 1` in `config = TokenGTConfig(` has no effect \r\nbecause `num_labels` gets a default value `2` in `PretrainedConfig` (see code below) file (super class of `TokenGTConfig(PretrainedConfig)`)\r\n\r\nwhich would give a size mismatch error \r\n\r\nhttps://github.com/huggingface/transformers/blob/9d1116e9951686f937d17697820117636bfc05a5/src/transformers/configuration_utils.py#L326-L330\r\n",
"It's really great to see your motivation, good job! :sparkles: \r\n\r\nI'll try to check the code to confirm the key renames you made, but I think they do make sense because of the naming changes between the original and new models.\r\n\r\nFor the id2label, I don't think it is such a good idea to modify things outside of the TokenGT files - normally the parent class (`PretrainedConfig`) is overwritten by the child class (`TokenGTConfig`), are you sure this modification is happening here? \r\nI think you could also try changing the `TokenGTConfig` `num_labels` default value to 1 instead of None and see what happens.",
"Yes, I am sure",
"Hi @Raman-Kumar !\r\nI took some time to clean the code a bit and edited some parts, it should be better now for the problems you mentioned. If problems occur in the future, fyi the Graphormer code which was integrated in the lib is quite similar to this one, so you can look at how they are managed there. \r\n\r\nBecause of a mixup on my github I had to create a new PR for this https://github.com/huggingface/transformers/pull/21745 and this is where you'll find the new code. Hoping it helps you! :hugs: ",
"Hi, @clefourrier \r\nI had already figured it out, but I was very sick for a few days π.\r\n\r\nIn the previous PR, I made three changes, after which it printed \"All good :)\":\r\n1. changing `num_labels` to `num_classes` (after that there is no need to add `id2label`, which you suggested not to add) \r\n2. in the file models/tokengt/configuration_tokengt.py, \r\n`import torch.nn.functional as F` was missing \r\n3. the `decode` name was wrongly written in the `TokenGTForGraphClassification` class's forward function \r\n\r\nI was just about to upload the newly created config.json and pytorch_model.bin files to my Hugging Face id. \r\n\r\nNow I will look at the new PR and will send changes with Tests and Docs to it. \r\n\r\n",
"That sounds good, these changes sound similar to the ones in the new PR. \r\n\r\nI hope you take rest and get better soon :hugs: ",
"Hi, back again\r\nUploaded converted checkpoint and config for \r\nlap - https://huggingface.co/raman-ai/tokengt-base-lap-pcqm4mv2\r\norf - https://huggingface.co/raman-ai/tokengt-base-orf-pcqm4mv2\r\n\r\nNow, I am writing tests, \r\n\r\nI tried to push some changes to [PR](https://github.com/huggingface/transformers/pull/21745) \r\nBut it says like authentication failed, do not have permission etc.\r\n\r\n\r\nHow should I push new commits to your PR? @clefourrier \r\nNeed to add me as a collaborator to your forked repo\r\n\r\nin my terminal \r\n```\r\n$ git remote -v\r\ngithub-desktop-clefourrier https://github.com/clefourrier/transformers.git (fetch)\r\ngithub-desktop-clefourrier https://github.com/clefourrier/transformers.git (push)\r\norigin https://github.com/Raman-Kumar/transformers.git (fetch)\r\norigin https://github.com/Raman-Kumar/transformers.git (push)\r\nupstream https://github.com/huggingface/transformers.git (fetch)\r\nupstream https://github.com/huggingface/transformers.git (push)\r\n```",
"@Raman-Kumar added you to my fork!",
"I created a new PR #22042 just for making a lot of commits and see where circleci do fail. So, I can correct it.\r\nLater I will do a single commit in your PR.\r\n\r\nI have added a new dependency `einops` in setup.py. In entire repo, it's fist time being used in tokengt model.\r\n\r\nI added TokenGTModelIntegrationTest. and now it passes all circleci checks.\r\n\r\nI have a question. @clefourrier \r\nHow to know the shape of inputs `node_data,num_nodes,edge_index,edge_data,edge_num,in_degree,out_degree,lap_eigvec,lap_eigval,labels` \r\nof Tokengt for `ids_tensor()` function?\r\n\r\nLike in Graphormer\r\n```\r\nattn_bias = ids_tensor(\r\n [self.batch_size, self.graph_size + 1, self.graph_size + 1], self.num_atoms\r\n ) # Def not sure here\r\n attn_edge_type = ids_tensor([self.batch_size, self.graph_size, self.graph_size, 1], self.num_edges)\r\n spatial_pos = ids_tensor([self.batch_size, self.graph_size, self.graph_size], self.num_spatial)\r\n in_degree = ids_tensor([self.batch_size, self.graph_size], self.num_in_degree)\r\n out_degree = ids_tensor([self.batch_size, self.graph_size], self.num_out_degree)\r\n input_nodes = ids_tensor([self.batch_size, self.graph_size, 1], self.num_atoms)\r\n input_edges = ids_tensor(\r\n [self.batch_size, self.graph_size, self.graph_size, self.multi_hop_max_dist, 1], self.num_edges\r\n )\r\n labels = ids_tensor([self.batch_size], self.num_classes)\r\n```\r\n",
"Ok, great for the PR, and congrats for the tests!\r\nFor einops, do you need a lot of code? It would be better to copy paste the functions we will need (citing them and if the license allows ofc) as we only allow new dependencies for very specific cases.\r\n\r\nFor TokenGT, are you talking about the shape of inputs provided to the test suite?\r\nMost attributes will have the same shape as for Graphormer (`batch_size` in position one, then `graph_size` or linked to it for inputs which look over the whole graph, like those pertaining to edges/nodes (includes the degrees for example)). The collation function should be able to help you with the specifics, since the shape must be provided there. Last resort, to confirm your intuition, you can also print all the dimensions for the elements you want. \r\n",
"What is the current status of TokenGT on Hugging Face? Is it possible to use this for token/node classification tasks? If so, could someone point me to a good starting point or example for figuring that out? I would love to try to use this on protein data through Hugging Face for node/token classification :)",
"Hi @Amelie-Schreiber !\r\nRaman has been working on this integration in their spare time, but I don't think it's complete yet.\r\nOne of the latest PRs was [here](https://github.com/huggingface/transformers/pull/21745) if you want to take a look too :)",
"Hey, I am resuming this. Lost touch for some time.\r\nWill further contribute to it.\r\n\r\n@clefourrier I may ask questions if I get stuck",
"Cool! \r\nFeel free to ask questions! I'm no longer actively working on graphs but I'll do my best to answer within a reasonable delay.",
"How is it going now? Does it work? π«₯"
] | 1,673
| 1,706
| null |
MEMBER
| null |
### Model description
Adding the TokenGT graph transformer model with @Raman-Kumar (see [Graphormer issue](https://github.com/huggingface/transformers/issues/20962#issuecomment-1375361519))
@Raman-Kumar I'll create a PR with what I had ported of TokenGT at the end of the week, to give you a starting point! You'll need to read [this](https://huggingface.co/docs/transformers/add_new_model) first, to get an idea of the steps we follow when integrating a model.
Then, the first step will be checking the code against a checkpoint, so you need to find and download one in order to compare results with the original implementation.
Does that work for you?
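That comparison step can be sketched roughly as follows (a minimal stand-in: `original_model_forward` and `ported_model_forward` are hypothetical placeholders for the fairseq and transformers forward passes, not real APIs):

```python
import numpy as np

# Hypothetical stand-ins for the two forward passes being compared; in
# practice these would run the fairseq checkpoint and the ported
# transformers model on the same preprocessed graph batch.
def original_model_forward(x):
    return x * 2.0 + 1.0

def ported_model_forward(x):
    return x * 2.0 + 1.0

x = np.random.RandomState(0).randn(4, 8)
orig = original_model_forward(x)
ported = ported_model_forward(x)

# The ported model should reproduce the original outputs up to a small
# numerical tolerance (typically 1e-3 to 1e-5 for float32 weights).
max_diff = np.abs(orig - ported).max()
assert np.allclose(orig, ported, atol=1e-5), f"outputs diverge by {max_diff}"
print("All good :)")
```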
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21079/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21079/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/21078
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21078/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21078/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21078/events
|
https://github.com/huggingface/transformers/issues/21078
| 1,527,865,202
|
I_kwDOCUB6oc5bEV9y
| 21,078
|
batched generate using forced_decoder_ids
|
{
"login": "ghadiaravi13",
"id": 40660742,
"node_id": "MDQ6VXNlcjQwNjYwNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/40660742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghadiaravi13",
"html_url": "https://github.com/ghadiaravi13",
"followers_url": "https://api.github.com/users/ghadiaravi13/followers",
"following_url": "https://api.github.com/users/ghadiaravi13/following{/other_user}",
"gists_url": "https://api.github.com/users/ghadiaravi13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghadiaravi13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghadiaravi13/subscriptions",
"organizations_url": "https://api.github.com/users/ghadiaravi13/orgs",
"repos_url": "https://api.github.com/users/ghadiaravi13/repos",
"events_url": "https://api.github.com/users/ghadiaravi13/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghadiaravi13/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante ",
"Hey @ghadiaravi13 π \r\n\r\nI see no major technical hurdles to implementing the feature. However, before we go that route, what is the use case you expect from that feature? (As any increase in complexity, we should make sure it is done for a good reason :) )",
"Hi @gante!\r\n\r\nThe expected usage is when we use generate function over a batch of inputs, and want to force the decoder output for each input in the batch. Having batch support rather than iterating manually will have computational benefits, I suppose. You may correct me though.",
"@ghadiaravi13 `forced_decoder_ids` should already work at batch level, assuming you want the same forced tokens for all members of the batch. Doesn't this solve your use case?\r\n\r\nThe alternative, where each member of the batch has its own `forced_decoder_ids`, requires significantly increasing the complexity of the code. As such, to include it in `transformers`, we need some demonstration that it is a valued feature :)",
"Yes I was referring to the latter case actually. Sure I could imagine the increase in code complexity, just wanted to check. I'll stick with manually iterating for now. Thanks for responding!",
"> @ghadiaravi13 `forced_decoder_ids` should already work at batch level, assuming you want the same forced tokens for all members of the batch. Doesn't this solve your use case?\r\n> \r\n> The alternative, where each member of the batch has its own `forced_decoder_ids`, requires significantly increasing the complexity of the code. As such, to include it in `transformers`, we need some demonstration that it is a valued feature :)\r\n\r\nI think it is crucial for cases where you want to force a different prompt for each member of the batch, e.g. training Whisper on transcribe and translate tasks in the same dataset. In this case, some members of the batch need the transcribe token forced and some the translate token.\r\n\r\nHow is it possible to solve this otherwise? ",
"`.generate()` is not used at training time, so that question doesn't apply. See our [blog post on fine-tuning Whisper](https://huggingface.co/blog/fine-tune-whisper#prepare-feature-extractor-tokenizer-and-data) for further reference.\r\n\r\nAt inference time, it is possible to build a solution to handle both tasks at once. However, the benefits are small (vs separating different tasks in different data batches) and we'd have the burden of long-term maintenance of the code. I'd still encourage you to build your own custom `LogitsProcessor` to solve the problem if it is relevant to your use case -- we've built a modular codebase precisely so anyone can easily build their custom solutions without depending on us π€ \r\n\r\nFinally, I'd like to mention that [Whisper has its own `.generate()` function](https://github.com/huggingface/transformers/blob/d3046dad809b7b10019b142ae20b49fb58d21c28/src/transformers/models/whisper/modeling_whisper.py#L1232) that easily abstracts the parameterization for each task.\r\n"
] | 1,673
| 1,675
| 1,674
|
NONE
| null |
### Feature request
Currently, `forced_decoder_ids` only accepts a single list of `[index, token_id]` pairs to force the decoder output for a given input. However, it does not support batched output forcing, where the input itself is a batch. Could we have support for `forced_decoder_ids = List[List[List[int]]]`, where the 0th dimension corresponds to the batch dimension?
### Motivation
This would make it possible to force different outputs for different inputs in the same batch simultaneously.
### Your contribution
I could help submit a PR; I wanted to first understand the feasibility of the feature request and/or other implications.
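The per-sample forcing idea could be sketched as a small processor (illustrative only — plain NumPy rather than the actual `transformers` `LogitsProcessor` API; the class name and call signature here are made up):

```python
import numpy as np

class PerSampleForcedTokensProcessor:
    """Hypothetical sketch: forced_per_sample[i] maps a decoding step to the
    token id that sample i must emit at that step."""

    def __init__(self, forced_per_sample):
        self.forced = forced_per_sample  # list of dicts: step -> token_id

    def __call__(self, step, scores):
        # scores: (batch_size, vocab_size) array of next-token logits
        scores = scores.copy()
        for i, forced in enumerate(self.forced):
            if step in forced:
                token = forced[step]
                scores[i, :] = -np.inf   # mask every token...
                scores[i, token] = 0.0   # ...except the forced one
        return scores

# Batch of 2: sample 0 is forced to token 5 at step 1, sample 1 to token 7.
proc = PerSampleForcedTokensProcessor([{1: 5}, {1: 7}])
logits = np.zeros((2, 10))
out = proc(1, logits)
print(out.argmax(axis=-1))  # -> [5 7]
```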
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21078/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21077
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21077/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21077/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21077/events
|
https://github.com/huggingface/transformers/issues/21077
| 1,527,663,887
|
I_kwDOCUB6oc5bDk0P
| 21,077
|
TypeError (NumPy concatenation) in modeling_wav2vec2 at _sample_negative_indices
|
{
"login": "anautsch",
"id": 2925439,
"node_id": "MDQ6VXNlcjI5MjU0Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2925439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anautsch",
"html_url": "https://github.com/anautsch",
"followers_url": "https://api.github.com/users/anautsch/followers",
"following_url": "https://api.github.com/users/anautsch/following{/other_user}",
"gists_url": "https://api.github.com/users/anautsch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anautsch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anautsch/subscriptions",
"organizations_url": "https://api.github.com/users/anautsch/orgs",
"repos_url": "https://api.github.com/users/anautsch/repos",
"events_url": "https://api.github.com/users/anautsch/events{/privacy}",
"received_events_url": "https://api.github.com/users/anautsch/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sanchit-gandhi ",
"Hey @anautsch! Thanks for opening this issue π€ Would it be possible to provide a reproducible code snippet that only uses `transformers`? This way we can pinpoint the exact issue in the library.\r\n\r\nIn an attempt to try and reproduce your issue, I tested the Wav2Vec2 pretraining script from the `transformers` library: \r\n[run_wav2vec2_pretraining_no_trainer.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py). However, I was not able to reproduce the error you encountered; both the script and model worked fine without the aforementioned numpy concatenation error.\r\n\r\nIn these tests, I used the ['base' Wav2Vec2 model](https://huggingface.co/patrickvonplaten/wav2vec2-base-v2/blob/main/config.json) and pre-trained on a [dummy dataset](https://huggingface.co/datasets/hf-internal-testing/librispeech_asr_dummy/tree/main) consisting of 73 samples from the LibriSpeech ASR corpus (~9MB of data). \r\n\r\nYou can reproduce this dummy run using the following command:\r\n```\r\naccelerate launch run_wav2vec2_pretraining_no_trainer.py \\\r\n\t--dataset_name=\"hf-internal-testing/librispeech_asr_dummy\" \\\r\n\t--dataset_config_name=\"clean\" \\\r\n\t--dataset_split_names validation \\\r\n\t--model_name_or_path=\"patrickvonplaten/wav2vec2-base-v2\" \\\r\n\t--output_dir=\"./wav2vec2-pretrain-issue\" \\\r\n\t--num_train_epoch=\"1\" \\\r\n\t--max_duration_in_seconds=\"20.0\" \\\r\n\t--per_device_train_batch_size=\"8\" \\\r\n\t--per_device_eval_batch_size=\"8\" \\\r\n\t--validation_split_percentage=\"10\" \\\r\n\t--gradient_checkpointing\r\n```\r\n**Print Output:**\r\n```\r\nGradients have overflown - skipping update step... Updating gradient scale to 65536.0... \r\nGradients have overflown - skipping update step... Updating gradient scale to 32768.0... \r\nGradients have overflown - skipping update step... Updating gradient scale to 16384.0... 
\r\nGradients have overflown - skipping update step... Updating gradient scale to 8192.0... \r\nGradients have overflown - skipping update step... Updating gradient scale to 4096.0... \r\nGradients have overflown - skipping update step... Updating gradient scale to 2048.0... \r\n| val_loss: 4.825e+00| val_contrastive_loss: 4.732e+00| val_diversity_loss: 9.208e-01| val_num_losses: 1.000e+00 \r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 7/7 [00:09<00:00, 1.08it/s]\r\nConfiguration saved in ./wav2vec2-pretrain-issue/config.json\r\nModel weights saved in ./wav2vec2-pretrain-issue/pytorch_model.bin\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 7/7 [00:11<00:00, 1.63s/it]\r\n```\r\n\r\nAs you can see, the script and model worked fine for me here! If you can provide a similar code snippet that demonstrates your issue that would be grand!",
"Hi @sanchit-gandhi, thank you for providing an alternate example (I couldn't get it running right away), and for nudging me towards a minimal example. After inspecting the inputs, it turned out we were passing a `(2, tensor(157))` mix where `(batch_size, sequence_length)` expects plain ints. It worked after calling `.item()` on the upstream variable, so the input argument becomes `(2, 157)` instead.",
"Hey @anautsch! Very glad to hear that - best of luck with your development!"
] | 1,673
| 1,674
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.4.0-73-generic-x86_64-with-glibc2.27
- Python version: 3.9.15
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.13.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes - Driver Version: 495.29.05 CUDA Version: 11.5
- Using distributed or parallel set-up in script?: no
### Who can help?
transformers library
@patrickvonplaten
### Information
- The official example scripts
- [X] My own modified scripts
### Tasks
- An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Error log
```
speechbrain.utils.checkpoints - Would load a checkpoint here, but none found yet.
speechbrain.utils.epoch_loop - Going into epoch 1
speechbrain.core - Exception:
Traceback (most recent call last):
File "recipes/CommonVoice/self-supervised-learning/wav2vec2/train_hf_wav2vec2.py", line 357, in <module>
asr_brain.fit(
File "speechbrain/core.py", line 1207, in fit
self._fit_train(train_set=train_set, epoch=epoch, enable=enable)
File "speechbrain/core.py", line 1060, in _fit_train
loss = self.fit_batch(batch)
File recipes/CommonVoice/self-supervised-learning/wav2vec2/train_hf_wav2vec2.py", line 111, in fit_batch
predictions = self.compute_forward(batch, sb.Stage.TRAIN)
File "recipes/CommonVoice/self-supervised-learning/wav2vec2/train_hf_wav2vec2.py", line 54, in compute_forward
out, mask = self.modules.wav2vec2(wavs)
File "torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "speechbrain/lobes/models/huggingface_wav2vec.py", line 434, in forward
transformers.models.wav2vec2.modeling_wav2vec2._sample_negative_indices(
File "transformers/models/wav2vec2/modeling_wav2vec2.py", line 285, in _sample_negative_indices
sampled_negative_indices[batch_idx] += batch_idx * sequence_length
TypeError: Concatenation operation is not implemented for NumPy arrays, use np.concatenate() instead. Please do not rely on this error; it may not be given on all Python implementations.
```
where
```
numpy 1.23.4
scipy 1.8.1
```
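For context, the failure boils down to mixing a 0-dim tensor into plain-integer shape arithmetic (as confirmed in the comments above — calling `.item()` upstream fixed it). A minimal sketch of the safe pattern, using a 0-dim NumPy array as a stand-in for the torch tensor:

```python
import numpy as np

# The failing call passed a 0-dim tensor where `sequence_length` must be a
# plain Python int; a 0-dim NumPy array stands in for the torch tensor here.
sequence_length = np.array(157)   # analogous to torch.tensor(157)

# Unwrapping with .item() (or int(...)) yields a plain int that is safe to
# use in shape construction and index-offset arithmetic:
safe_length = sequence_length.item()

sampled = np.zeros((2, safe_length), dtype=np.int64)
sampled[1] += 1 * safe_length     # mirrors `+= batch_idx * sequence_length`
print(sampled.shape)              # -> (2, 157)
```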
This issue came up while [reworking testing for SpeechBrain](https://github.com/speechbrain/speechbrain/pull/1600). As part of refactoring & expanding our integration of the HuggingFace transformers library, we made sure all SpeechBrain recipes are tested. After lifting an extra_dependency restriction, this error occurred.
What was changed?
https://github.com/speechbrain/speechbrain/blob/801b1501b0bde2a940fcb71af44b69b07eafb9f5/recipes/CommonVoice/self-supervised-learning/wav2vec2/extra_requirements.txt#L1
to
https://github.com/anautsch/speechbrain/blob/b7e1b02a8cb3be81640c40c23a99d5af646a24e5/recipes/CommonVoice/self-supervised-learning/wav2vec2/extra_requirements.txt#L1
How to reproduce?
a) either run the [recipe script](https://github.com/speechbrain/speechbrain/tree/develop/recipes/CommonVoice/self-supervised-learning/wav2vec2) from scratch (might be too hardware-intensive),
b) or use a testing tool we created that runs the recipe in a very light debug mode.
To use the recipe testing, please create an environment using this SpeechBrain version (from our mentioned PR).
```
git clone https://github.com/anautsch/speechbrain.git
cd speechbrain
git checkout refactor-recipe-testing
pip install -r requirements.txt
pip install transformers==4.25.1 huggingface-hub==0.11.1 datasets==2.7.1
pip install --editable .
```
The particular recipe can then be tested using this command:
```
python -c 'from tests.utils.recipe_tests import run_recipe_tests; print("TEST FAILED!") if not(run_recipe_tests(filters_fields=["Hparam_file"], filters=[["recipes/CommonVoice/self-supervised-learning/wav2vec2/hparams/wav2vec2_base.yaml"]], do_checks=False, run_opts="--device=cuda")) else print("TEST PASSED")'
```
This will result in
```
(1/1) Running test for CommonVoice_row_18...
ERROR: Error in CommonVoice_row_18 (recipes/CommonVoice/self-supervised-learning/wav2vec2/hparams/wav2vec2_base.yaml). Check tests/tmp/CommonVoice_row_18/stderr.txt and tests/tmp/CommonVoice_row_18/stdout.txt for more info.
```
and the above stack trace is available via `cat`:
```
cat tests/tmp/CommonVoice_row_18/std*
cat tests/tmp/CommonVoice_row_18/log.txt
```
I started an [issue on our end](https://github.com/speechbrain/speechbrain/issues/1787) as part of keeping track of all issues that surfaced during testing all SpeechBrain recipes. There, the suggestion is to reintroduce the dependency restriction `transformers==4.15`.
### Expected behavior
It would be great to lift all extra_dependency restrictions in SpeechBrain recipes and move them to the latest versions, e.g., `transformers>=4.25.2` instead of fixing it to a specific & older version (v4.15 dates back to Dec 22, 2021).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21077/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21076
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21076/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21076/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21076/events
|
https://github.com/huggingface/transformers/issues/21076
| 1,527,346,789
|
I_kwDOCUB6oc5bCXZl
| 21,076
|
Pushing T5ForConditionalGeneration to hub
|
{
"login": "GravermanDev",
"id": 76252605,
"node_id": "MDQ6VXNlcjc2MjUyNjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/76252605?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GravermanDev",
"html_url": "https://github.com/GravermanDev",
"followers_url": "https://api.github.com/users/GravermanDev/followers",
"following_url": "https://api.github.com/users/GravermanDev/following{/other_user}",
"gists_url": "https://api.github.com/users/GravermanDev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GravermanDev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GravermanDev/subscriptions",
"organizations_url": "https://api.github.com/users/GravermanDev/orgs",
"repos_url": "https://api.github.com/users/GravermanDev/repos",
"events_url": "https://api.github.com/users/GravermanDev/events{/privacy}",
"received_events_url": "https://api.github.com/users/GravermanDev/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"It will be very hard to help you without a reproducible example. `T5ForConditionalGeneration` is a `PreTrainedModel` and so does have a `push_to_hub` method.",
"Oh, Iβm sorry, @sgugger, here is my code\r\nhttps://colab.research.google.com/drive/1a59B4e8AooTFkUeMOs5Zsa0DIBWGM3k6?usp=sharing\r\n\r\nthe relevant part is at the bottom",
"during making a minimal example, the code started working, sorry for wasting your time, have a great day!"
] | 1,673
| 1,673
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.27
- Python version: 3.8.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
`trained_model.model.push_to_hub("model")`
```AttributeError Traceback (most recent call last)
[<ipython-input-21-b193d55bd628>](https://localhost:8080/#) in <module>
----> 1 trained_model.model.push_to_hub("model")
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in __getattr__(self, name)
1263 if name in modules:
1264 return modules[name]
-> 1265 raise AttributeError("'{}' object has no attribute '{}'".format(
1266 type(self).__name__, name))
1267
AttributeError: 'T5ForConditionalGeneration' object has no attribute 'push_to_hub'
```
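For context, the `AttributeError` above is raised by `torch.nn.Module.__getattr__`, which only resolves registered submodules/parameters and real attributes. A minimal stdlib sketch (a hypothetical stand-in class, not the torch source) of that lookup behaviour:

```python
# Hypothetical stand-in for torch.nn.Module's attribute lookup: names
# are first resolved against registered submodules, and anything else
# raises the AttributeError seen in the traceback above.
class ModuleLike:
    def __init__(self):
        self._modules = {}

    def __getattr__(self, name):
        modules = self.__dict__["_modules"]
        if name in modules:
            return modules[name]
        raise AttributeError(
            "'{}' object has no attribute '{}'".format(type(self).__name__, name)
        )

wrapper = ModuleLike()
wrapper._modules["model"] = "inner model"
print(wrapper.model)  # found among registered submodules

try:
    wrapper.push_to_hub  # neither a submodule nor a real attribute
except AttributeError as err:
    print(err)
```

This only illustrates where the error message is produced; as noted in the comments, `T5ForConditionalGeneration` is a `PreTrainedModel` and does have `push_to_hub` in current releases.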
### Expected behavior
I want to push a T5ForConditionalGeneration model to the hub, but it doesn't work. I don't know if you need more info.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21076/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21075
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21075/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21075/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21075/events
|
https://github.com/huggingface/transformers/issues/21075
| 1,526,712,521
|
I_kwDOCUB6oc5a_8jJ
| 21,075
|
CompVis/stable-diffusion-v1-4 does not appear to have a file named tokenizer/config.json
|
{
"login": "QiuLL",
"id": 1172763,
"node_id": "MDQ6VXNlcjExNzI3NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1172763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QiuLL",
"html_url": "https://github.com/QiuLL",
"followers_url": "https://api.github.com/users/QiuLL/followers",
"following_url": "https://api.github.com/users/QiuLL/following{/other_user}",
"gists_url": "https://api.github.com/users/QiuLL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/QiuLL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QiuLL/subscriptions",
"organizations_url": "https://api.github.com/users/QiuLL/orgs",
"repos_url": "https://api.github.com/users/QiuLL/repos",
"events_url": "https://api.github.com/users/QiuLL/events{/privacy}",
"received_events_url": "https://api.github.com/users/QiuLL/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The following sample runs without any issue:\r\n```py\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\r\n \"CompVis/stable-diffusion-v1-4\",\r\n subfolder=\"tokenizer\",\r\n use_fast=False,\r\n)\r\n```\r\n\r\nPlease include a code reproducer of your issue.",
"> The following sample runs without any issue:\r\n> \r\n> ```python\r\n> from transformers import AutoTokenizer\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(\r\n> \"CompVis/stable-diffusion-v1-4\",\r\n> subfolder=\"tokenizer\",\r\n> use_fast=False,\r\n> )\r\n> ```\r\n> \r\n> Please include a code reproducer of your issue.\r\n\r\n`from transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\r\n \"CompVis/stable-diffusion-v1-4\",\r\n subfolder=\"tokenizer\",\r\n use_fast=False,\r\n)`\r\n\r\n### **I used your codes as above, but still get the same error:**\r\n(diffusion) qll@longyuan:/data/qll/ColossalAI_2/ColossalAI/examples/images/dreambooth$ python test.py \r\nTraceback (most recent call last):\r\n File \"/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py\", line 239, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/tokenizer/config.json\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/utils/hub.py\", line 408, in cached_file\r\n resolved_file = hf_hub_download(\r\n File \"/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 124, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/huggingface_hub/file_download.py\", line 1067, in hf_hub_download\r\n metadata = get_hf_file_metadata(\r\n File 
\"/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 124, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/huggingface_hub/file_download.py\", line 1376, in get_hf_file_metadata\r\n hf_raise_for_status(r)\r\n File \"/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py\", line 257, in hf_raise_for_status\r\n raise EntryNotFoundError(message, response) from e\r\nhuggingface_hub.utils._errors.EntryNotFoundError: 404 Client Error. (Request ID: Root=1-63be11d8-73a357726aa9511757d467c4)\r\n\r\nEntry Not Found for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/tokenizer/config.json.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/data/qll/ColossalAI_2/ColossalAI/examples/images/dreambooth/test.py\", line 3, in <module>\r\n tokenizer = AutoTokenizer.from_pretrained(\r\n File \"/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py\", line 564, in from_pretrained\r\n config = AutoConfig.from_pretrained(\r\n File \"/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py\", line 746, in from_pretrained\r\n config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/configuration_utils.py\", line 553, in get_config_dict\r\n config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/configuration_utils.py\", line 608, in _get_config_dict\r\n resolved_config_file = cached_file(\r\n File 
\"/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/utils/hub.py\", line 453, in cached_file\r\n raise EnvironmentError(\r\nOSError: CompVis/stable-diffusion-v1-4 does not appear to have a file named tokenizer/config.json. Checkout 'https://huggingface.co/CompVis/stable-diffusion-v1-4/main' for available files.\r\n",
"> The following sample runs without any issue:\r\n> \r\n> ```python\r\n> from transformers import AutoTokenizer\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(\r\n> \"CompVis/stable-diffusion-v1-4\",\r\n> subfolder=\"tokenizer\",\r\n> use_fast=False,\r\n> )\r\n> ```\r\n> \r\n> Please include a code reproducer of your issue.\r\n\r\ntransformers 4.22.2\r\ntorch 1.12.1+cu113\r\n",
"You should upgrade to the latest version of Transformers, this is probably why I don't have the bug on my side, it has been fixed.",
"> You should upgrade to the latest version of Transformers, this is probably why I don't have the bug on my side, it has been fixed.\r\n\r\nthx, it works by upgrading to the latest version of Transformers!"
] | 1,673
| 1,673
| 1,673
|
NONE
| null |
### System Info
When I use AutoTokenizer to load a tokenizer with the code below:
```
tokenizer = transformers.AutoTokenizer.from_pretrained(
    args.pretrained_model_name_or_path,
    subfolder="tokenizer",
    revision=args.revision,
    use_fast=False,
)
```
I found it can't get the right tokenizer_config.json file. Indeed, the function tries to find a config.json file instead of tokenizer_config.json, so I don't know how to solve it.
```
  File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 564, in from_pretrained
    config = AutoConfig.from_pretrained(
        pretrained_model_name_or_path, trust_remote_code=trust_remote_code, **kwargs
    )
  File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 746, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/configuration_utils.py", line 553, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/configuration_utils.py", line 608, in _get_config_dict
    resolved_config_file = cached_file(
  File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/utils/hub.py", line 453, in cached_file
    raise EnvironmentError(
OSError: CompVis/stable-diffusion-v1-4 does not appear to have a file named tokenizer/config.json. Checkout 'https://huggingface.co/CompVis/stable-diffusion-v1-4/main' for available files.
```
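As a rough illustration of the lookup problem described here (an assumed stand-in, not the actual `transformers` resolution logic — per the comments, the real fix was upgrading the library), the failure mode is a resolver that only tries `<subfolder>/config.json` and never falls back to `tokenizer_config.json`:

```python
# Illustrative resolver: the function name and lookup order are
# assumptions for this sketch, not the transformers implementation.
def resolve_config(available_files, subfolder):
    for name in ("config.json", "tokenizer_config.json"):
        path = f"{subfolder}/{name}"
        if path in available_files:
            return path
    raise OSError(f"no config file found under {subfolder}/")

# The hub repo only ships tokenizer/tokenizer_config.json, so a resolver
# that stops after config.json reproduces the OSError above, while one
# with the fallback succeeds:
files = {"tokenizer/tokenizer_config.json", "tokenizer/vocab.json"}
print(resolve_config(files, "tokenizer"))  # tokenizer/tokenizer_config.json
```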
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
tokenizer = transformers.AutoTokenizer.from_pretrained(
args.pretrained_model_name_or_path,
subfolder="tokenizer",
revision=args.revision,
use_fast=False,
)
### Expected behavior
Can someone help me?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21075/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21074
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21074/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21074/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21074/events
|
https://github.com/huggingface/transformers/pull/21074
| 1,526,695,230
|
PR_kwDOCUB6oc5HCOYg
| 21,074
|
[WIP]add transformer transducer
|
{
"login": "jp1924",
"id": 93233241,
"node_id": "U_kgDOBY6gWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/93233241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jp1924",
"html_url": "https://github.com/jp1924",
"followers_url": "https://api.github.com/users/jp1924/followers",
"following_url": "https://api.github.com/users/jp1924/following{/other_user}",
"gists_url": "https://api.github.com/users/jp1924/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jp1924/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jp1924/subscriptions",
"organizations_url": "https://api.github.com/users/jp1924/orgs",
"repos_url": "https://api.github.com/users/jp1924/repos",
"events_url": "https://api.github.com/users/jp1924/events{/privacy}",
"received_events_url": "https://api.github.com/users/jp1924/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @jp1924! As discussed on the issue https://github.com/huggingface/transformers/issues/20961#issuecomment-1382245091, it's not possible to add a model to `transformers` when official weights are not available (please refer to the message thread for more details).\r\n\r\nI would advise that you focus the implementation on a Transformer-Transducer codebase where strong pre-trained weights are available and open-sourced. I'm more than happy to help find a suitable codebase + weights to port! This would be a valuable addition to the `transformers` library.",
"hey @sanchit-gandhi\r\n\r\nI saw several papers(like [MS T-T](https://arxiv.org/pdf/1910.12977.pdf), [Google T-T](https://arxiv.org/pdf/2002.02562.pdf), [Facebook T-T](https://arxiv.org/pdf/2010.11395.pdf)) related to the T-T model, but there is no content related to the official github code in the paper like BERT. So, while looking for an alternative, i found Transformer-Transducer implemented in a library called [openspeech](https://github.com/openspeech-team/openspeech). \r\n\r\nThe model weight is not disclosed, but there is a code that can train [T-T](https://github.com/openspeech-team/openspeech/tree/main/openspeech/models/transformer_transducer). So I'm thinking of using openspeech to get the weight of the T-T first and then transferring the model and code to the hugingface, is it possible?",
"Hey @jp1924!\r\n\r\nCool that you've been able to dig so deeply into the ASR literature! Indeed, these are all fantastic research papers that highlight the strong performance of the T-T architecture. It's a real pity that neither MS, Google nor Meta released open-sourced weights for these papers, as I think a performant T-T model would be of great use to the open-source community, especially for low-latency/on-device applications.\r\n\r\nUnfortunately, again with OpenSpeech it's an unofficial implementation without weights, so we probably can't add this to `transformers` either.\r\n\r\nI'm really sorry you've invested time here without luck finding a performant set of open-source T-T weights. I had a look through your wandb logs, it does look as though the model is working. We can leave this PR open if you want to continue iterating and provide updates, but we won't be able to merge it to `transformers` without weights from a well established research paper (e.g. from MS, Google, Meta, etc)",
"Hey @sanchit-gandhi @flozi00 @fxtentacle @YooSungHyun!\r\n\r\nThank you so much for your interest in the T-T model! Unfortunately, it looks like we'll have to close the PR.....\r\n\r\nI understand that this sophistication and prudence makes Transformers even better! Most of all, it was really nice to have access to Transformers' philosophy and new features!\r\n\r\n---\r\nHey @sanchit-gandhi!\r\n\r\nI have a question about PR. What is the range of the official code and weight? \r\n\r\nI think the emformer of #17302 is an example. The emformer paper does not have an official github code & weight, such as BERT. \r\n\r\nHowever, the emformer code and weight have been uploaded to torchaudio. So my question is, can I contribute to Transformer by using the code and weight in the certified library even if it's not the code listed in the official paper?",
"Hey @jp1924! Great question! The short answer is: code and weights in a certified library = yes! Just code = no"
] | 1,673
| 1,677
| 1,676
|
NONE
| null |
# What does this PR do?
#20961
This PR adds [Transformer-Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss](https://arxiv.org/abs/2002.02562)
This model is a streaming model that recognizes text from audio in real time. There is no site where the model weights have been uploaded.
Transformer-Transducer implementation: https://github.com/jp1924/transformer-transducer
RNN-Transducer reference: [https://lorenlugosch.github.io/posts/2020/11/transducer/](https://lorenlugosch.github.io/posts/2020/11/transducer/)
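As background on the architecture, a transducer combines a streaming audio encoder with a label predictor through a joint network that scores the next token at every (frame, label-step) pair. A small NumPy sketch under assumed toy shapes (all names are illustrative — this is not the PR's code):

```python
import numpy as np

# Toy dimensions: T audio frames, U label steps, hidden size H, vocab V.
T, U, H, V = 4, 3, 8, 10
rng = np.random.default_rng(0)

enc = rng.standard_normal((T, H))    # streaming audio-encoder outputs
pred = rng.standard_normal((U, H))   # label-predictor outputs
W_joint = rng.standard_normal((H, V))

# Joint network: broadcast-add every frame against every label step,
# apply a nonlinearity, and project to vocabulary logits. The RNN-T
# loss then marginalises over all alignments through this (T, U, V)
# lattice.
logits = np.tanh(enc[:, None, :] + pred[None, :, :]) @ W_joint
print(logits.shape)  # (4, 3, 10)
```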
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Models: speech models
@sanchit-gandhi
Library: generate
@gante
Maintained examples: pytorch
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21074/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21074",
"html_url": "https://github.com/huggingface/transformers/pull/21074",
"diff_url": "https://github.com/huggingface/transformers/pull/21074.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21074.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21073
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21073/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21073/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21073/events
|
https://github.com/huggingface/transformers/issues/21073
| 1,526,561,139
|
I_kwDOCUB6oc5a_Xlz
| 21,073
|
Pre-trained tokenizer `repr` is inconsistent with attribute name
|
{
"login": "inwaves",
"id": 8530685,
"node_id": "MDQ6VXNlcjg1MzA2ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8530685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/inwaves",
"html_url": "https://github.com/inwaves",
"followers_url": "https://api.github.com/users/inwaves/followers",
"following_url": "https://api.github.com/users/inwaves/following{/other_user}",
"gists_url": "https://api.github.com/users/inwaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/inwaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/inwaves/subscriptions",
"organizations_url": "https://api.github.com/users/inwaves/orgs",
"repos_url": "https://api.github.com/users/inwaves/repos",
"events_url": "https://api.github.com/users/inwaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/inwaves/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The `PreTrainedTokenizerFast` and `PreTrainedTokenizerBase` are abstract classes and should not really be used. The `model_max_len` is a vestige of a previous argument, opening a PR to fix this typo as indeed the attribute does not exist. "
] | 1,673
| 1,673
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.3 (cpu)
- Jax version: 0.4.1
- JaxLib version: 0.4.1
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
Hi @ArthurZucker, since this is tokeniser-related, do you mind having a look?
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This one's pretty straightforward:
1. Using a pre-trained tokeniser (`PreTrainedTokenizerFast` or `PreTrainedTokenizerBase`), print out the object.
2. The `repr`, which is defined in [`tokenization_utils_base`](https://github.com/huggingface/transformers/blob/a3c37825cc1e305dde63455b5f321586e6d29e07/src/transformers/tokenization_utils_base.py#L1573), returns something like this:
`PreTrainedTokenizerFast(name_or_path='gpt2', vocab_size=50257, model_max_len=1024, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<|endoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<|endoftext|>', 'pad_token': '[PAD]'})`
3. Note the `model_max_len` attribute.
### Expected behavior
The repr should display `model_max_length=1024` instead, since that is the actual name of the attribute. Other attribute labels in the repr seem consistent with the name, which leads me to believe this is a typo.
I came across this because I printed out the object, and then tried to access that tokeniser's `model_max_len`, which of course errors out since there's no attribute with that name.
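A tiny sketch of the suggested fix (an illustrative class, not the transformers source): derive the repr label from the real attribute name so the two cannot drift apart.

```python
# Minimal example: the repr echoes the actual attribute name, so
# copy-pasting a label from the printed object always resolves.
class TokenizerLike:
    def __init__(self, model_max_length=1024):
        self.model_max_length = model_max_length

    def __repr__(self):
        return f"{type(self).__name__}(model_max_length={self.model_max_length})"

tok = TokenizerLike()
print(repr(tok))                         # TokenizerLike(model_max_length=1024)
print(getattr(tok, "model_max_length"))  # 1024 -- the label is a real attribute
```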
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21073/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21072
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21072/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21072/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21072/events
|
https://github.com/huggingface/transformers/pull/21072
| 1,526,539,751
|
PR_kwDOCUB6oc5HBvB9
| 21,072
|
Fix header level
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,673
| 1,673
|
MEMBER
| null |
Fixes header level for the last two sections in the pipeline tutorial.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21072/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21072",
"html_url": "https://github.com/huggingface/transformers/pull/21072",
"diff_url": "https://github.com/huggingface/transformers/pull/21072.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21072.patch",
"merged_at": 1673375050000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21071
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21071/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21071/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21071/events
|
https://github.com/huggingface/transformers/pull/21071
| 1,526,529,815
|
PR_kwDOCUB6oc5HBs-A
| 21,071
|
Fix git model for generate with beam search.
|
{
"login": "PeterL1n",
"id": 7651753,
"node_id": "MDQ6VXNlcjc2NTE3NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7651753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PeterL1n",
"html_url": "https://github.com/PeterL1n",
"followers_url": "https://api.github.com/users/PeterL1n/followers",
"following_url": "https://api.github.com/users/PeterL1n/following{/other_user}",
"gists_url": "https://api.github.com/users/PeterL1n/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PeterL1n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PeterL1n/subscriptions",
"organizations_url": "https://api.github.com/users/PeterL1n/orgs",
"repos_url": "https://api.github.com/users/PeterL1n/repos",
"events_url": "https://api.github.com/users/PeterL1n/events{/privacy}",
"received_events_url": "https://api.github.com/users/PeterL1n/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey, there seems to be an issue with the Circle CI. Will ping @sgugger for this. \r\nCan you add the error you are getting on your issue? \r\nI think we should also add a test to make sure that this model is ran, the fix LGTM π thanks",
"It seems there is an issue with your CircleCI permissions, the tests won't run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?",
"Thanks! Can you now run `make style` to fix the test failure you see on the CI?",
"@NielsRogge Please help adding the test. I am having limited capacity here."
] | 1,673
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #21070
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada @gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21071/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21071",
"html_url": "https://github.com/huggingface/transformers/pull/21071",
"diff_url": "https://github.com/huggingface/transformers/pull/21071.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21071.patch",
"merged_at": 1674052824000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21070
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21070/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21070/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21070/events
|
https://github.com/huggingface/transformers/issues/21070
| 1,526,527,743
|
I_kwDOCUB6oc5a_Pb_
| 21,070
|
GIT does not work with beam search
|
{
"login": "PeterL1n",
"id": 7651753,
"node_id": "MDQ6VXNlcjc2NTE3NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7651753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PeterL1n",
"html_url": "https://github.com/PeterL1n",
"followers_url": "https://api.github.com/users/PeterL1n/followers",
"following_url": "https://api.github.com/users/PeterL1n/following{/other_user}",
"gists_url": "https://api.github.com/users/PeterL1n/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PeterL1n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PeterL1n/subscriptions",
"organizations_url": "https://api.github.com/users/PeterL1n/orgs",
"repos_url": "https://api.github.com/users/PeterL1n/repos",
"events_url": "https://api.github.com/users/PeterL1n/events{/privacy}",
"received_events_url": "https://api.github.com/users/PeterL1n/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,673
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help?
@gante @ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Take the script from official doc (https://huggingface.co/docs/transformers/main/model_doc/git#transformers.GitForCausalLM)
Add `num_beams=3`. The GIT model will report an error.
```python
from transformers import AutoProcessor, AutoModelForCausalLM
import requests
from PIL import Image
processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50, num_beams=3)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_caption)
```
### Expected behavior
GIT model should work with beam search.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21070/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21070/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21069
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21069/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21069/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21069/events
|
https://github.com/huggingface/transformers/pull/21069
| 1,526,440,819
|
PR_kwDOCUB6oc5HBaYk
| 21,069
|
Update squad.py
|
{
"login": "sammys377",
"id": 99053593,
"node_id": "U_kgDOBedwGQ",
"avatar_url": "https://avatars.githubusercontent.com/u/99053593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammys377",
"html_url": "https://github.com/sammys377",
"followers_url": "https://api.github.com/users/sammys377/followers",
"following_url": "https://api.github.com/users/sammys377/following{/other_user}",
"gists_url": "https://api.github.com/users/sammys377/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sammys377/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sammys377/subscriptions",
"organizations_url": "https://api.github.com/users/sammys377/orgs",
"repos_url": "https://api.github.com/users/sammys377/repos",
"events_url": "https://api.github.com/users/sammys377/events{/privacy}",
"received_events_url": "https://api.github.com/users/sammys377/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21069). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,676
| 1,676
|
NONE
| null |
# What does this PR do?
Fix a bug for the Splinter Tokenizer to account for the extra [QUESTION] and period tokens.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21069/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21069",
"html_url": "https://github.com/huggingface/transformers/pull/21069",
"diff_url": "https://github.com/huggingface/transformers/pull/21069.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21069.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21068
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21068/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21068/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21068/events
|
https://github.com/huggingface/transformers/issues/21068
| 1,526,439,824
|
I_kwDOCUB6oc5a-5-Q
| 21,068
|
DeBerta Wrong Dimension for MLM Prediction Head
|
{
"login": "zanussbaum",
"id": 33707069,
"node_id": "MDQ6VXNlcjMzNzA3MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zanussbaum",
"html_url": "https://github.com/zanussbaum",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions",
"organizations_url": "https://api.github.com/users/zanussbaum/orgs",
"repos_url": "https://api.github.com/users/zanussbaum/repos",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/zanussbaum/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Not sure I understand where you found that we only have from `hidden_size` to `hidden_size`, but for `MLM`, we use the `DebertaOnlyMLMHead`, which uses a `decoder` that is the prediction head. See `self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)` [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta/modeling_deberta.py#L1146)",
"Ah sorry I forgot to include this line!\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L872\r\nIt looks like it checks for an attribute `embedding_size` in the config, but defaults to `hidden_size` if not found. \r\n\r\nIt doesnβt seem like that attribute is present in the config though? Am I setting up the model incorrectly?",
"Okay, again the head that you mentioned : `DebertaPredictionHeadTransform` is just a head, and it does not use the `DebertaV2Embedding`. It is only used as such (and not as an entire model). The size is correct as what you are looking for it in the `DebertaForMaskedLM`. The `self.model` attribute encompasses the `embedding` layer. ",
"Hm okay thanks for the help! Seems I misunderstood"
] | 1,673
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
### System Info
From the [original implementation](https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/deberta/mlm.py#L21), the MLM head transforms the hidden dimension to the embedding dimension.
However, it seems that in the HF version, we go from `hidden_size` to `hidden_size`. Shouldn't it be from `hidden_size` to `embedding_size`, especially since the [embeddings get tied](https://github.com/huggingface/transformers/blob/v4.24.0/src/transformers/modeling_utils.py#L1203) eventually?
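The fallback behavior discussed in this thread (checking for an `embedding_size` attribute on the config and defaulting to `hidden_size`) can be illustrated with a minimal, self-contained sketch. The `Config` class and `head_output_dim` helper below are hypothetical stand-ins for illustration, not the real transformers classes:

```python
# Hypothetical minimal config: only defines `embedding_size` when one is given,
# mirroring configs that may or may not carry that attribute.
class Config:
    def __init__(self, hidden_size, embedding_size=None):
        self.hidden_size = hidden_size
        if embedding_size is not None:
            self.embedding_size = embedding_size


def head_output_dim(config):
    # Same pattern as getattr(config, "embedding_size", config.hidden_size):
    # use the separate embedding width if present, else fall back to hidden_size.
    return getattr(config, "embedding_size", config.hidden_size)


print(head_output_dim(Config(hidden_size=768)))                      # 768
print(head_output_dim(Config(hidden_size=768, embedding_size=128)))  # 128
```

With no `embedding_size` on the config, the head projects `hidden_size -> hidden_size`; only configs that explicitly set a smaller embedding width get the `hidden_size -> embedding_size` projection.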
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
N/A
### Expected behavior
`DebertaPredictionHeadTransform.dense.weight` should be of size `hidden_size, embedding_size`, not `hidden_size, hidden_size`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21068/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21067
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21067/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21067/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21067/events
|
https://github.com/huggingface/transformers/pull/21067
| 1,526,381,472
|
PR_kwDOCUB6oc5HBNYX
| 21,067
|
Update task summary
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Ok, I'm finally finished with the first draft (took a bit longer to learn some models I wasn't familiar with)! I'd appreciate a general review of the scope of this page to make sure we're aligned (ie, are some sections too in-depth, are some not explained well enough?). Thanks in advance @sgugger @MKhalusova ! π₯Ή\r\n\r\nAfterward, I'll ping one of our audio and computer vision experts for a more in-depth review of those sections π ",
"Thanks for the feedback, I added some images to go along with the text!\r\n\r\n@NielsRogge, would you mind reviewing the computer vision section? This guide is a high-level overview, and the goal is to help users understand how a certain task is solved by a model. Please feel free to let me know if it's too detailed, not detailed enough, or if I got something wrong! Also, if you know of a good beginner's resource for computer vision we can link to, that'd be great as well to set expectations for the reader. Thanks! π\r\n\r\n@sanchit-gandhi, if you could do the same with the audio section, that'd be awesome. Thank you! π"
] | 1,673
| 1,675
| 1,675
|
MEMBER
| null |
This is the second part of updating the task summary to be more conceptual. After a brief introduction and background to the tasks Transformers can solve in [part 1](https://github.com/huggingface/transformers/pull/21014), this PR is a bit more advanced and digs deeper into explaining how Transformer solves these tasks.
### To-do:
- [x] Add computer vision section
- [x] Add NLP section
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21067/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21067",
"html_url": "https://github.com/huggingface/transformers/pull/21067",
"diff_url": "https://github.com/huggingface/transformers/pull/21067.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21067.patch",
"merged_at": 1675366887000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21066
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21066/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21066/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21066/events
|
https://github.com/huggingface/transformers/pull/21066
| 1,526,118,095
|
PR_kwDOCUB6oc5HATSd
| 21,066
|
Update docstring for CLIPConfig
|
{
"login": "yingzha",
"id": 8920116,
"node_id": "MDQ6VXNlcjg5MjAxMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8920116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yingzha",
"html_url": "https://github.com/yingzha",
"followers_url": "https://api.github.com/users/yingzha/followers",
"following_url": "https://api.github.com/users/yingzha/following{/other_user}",
"gists_url": "https://api.github.com/users/yingzha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yingzha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yingzha/subscriptions",
"organizations_url": "https://api.github.com/users/yingzha/orgs",
"repos_url": "https://api.github.com/users/yingzha/repos",
"events_url": "https://api.github.com/users/yingzha/events{/privacy}",
"received_events_url": "https://api.github.com/users/yingzha/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Yes, you will need to refresh your circleCI permissions and push an empty commit so we can check the tests are passing.",
"@sgugger Done."
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes missing imports in the `CLIPConfig` docstring.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21066/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21066",
"html_url": "https://github.com/huggingface/transformers/pull/21066",
"diff_url": "https://github.com/huggingface/transformers/pull/21066.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21066.patch",
"merged_at": 1673443347000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21065
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21065/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21065/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21065/events
|
https://github.com/huggingface/transformers/pull/21065
| 1,525,832,025
|
PR_kwDOCUB6oc5G_U0b
| 21,065
|
Fixed resize_token_embedding issue #21053
|
{
"login": "susnato",
"id": 56069179,
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susnato",
"html_url": "https://github.com/susnato",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"repos_url": "https://api.github.com/users/susnato/repos",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @gante and @Rocketknight1 ",
"@sgugger feel free to merge if you approve. As I wrote above, other models have a similar problem (which require a more elaborate fix)",
"@susnato can you remove the `Fixes https://github.com/huggingface/transformers/issues/21053` at the top? That way, the issue stays open and I'll likely won't forget to fix the other models :)",
"> @susnato can you remove the `Fixes https://github.com/huggingface/transformers/issues/21053` at the top? That way, the issue stays open and I'll likely won't forget to fix the other models :)\r\n\r\nHi, @gante I removed the line...is it ok now?",
"> This is absolutely correct. `self.vocab_size` can easily get stale when the vocabulary gets updated, and the check should be done against the config.\r\n> \r\n> (there are other models with this issue, where the fix needs to be slightly different, so I'll have a look very soon)\r\n\r\nHi, @gante if you want, I would be happy to look into this and fix if I can.",
"@susnato sounds good!\r\n\r\nMy plan consists in removing all references to `self.vocab_size`, deleting the variable whenever it is a variable that is set at `__init__` time from the `config` (if needed, store the `config` in `self.config` instead, since it will hold the mutable vocabulary size).\r\n\r\nIf you search for \"tf.cast(self.vocab_size\", you will find all matches that will likely have to be touched.",
"> @susnato sounds good!\r\n> \r\n> My plan consists in removing all references to `self.vocab_size`, deleting the variable whenever it is a variable that is set at `__init__` time from the `config` (if needed, store the `config` in `self.config` instead, since it will hold the mutable vocabulary size).\r\n> \r\n> If you search for \"tf.cast(self.vocab_size\", you will find all matches that will likely have to be touched.\r\n\r\nHi @gante I am going to check for all models in `src/transformers/models/modeling_tf_<model>.py` to remove references of self.vocab_size and also I found some references of self.vocab_size in some of the `<model>MLMHead`, I need to change them too right? ",
"@susnato yes. If we look at the corresponding PT implementation e.g. for Albert, the layer classes store `self.config = config` for future use, as opposed to individual attributes of `config`. Making the switch here protects us from errors like the one that originated this PR :)"
] | 1,673
| 1,706
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
There was a typo at line 449 in [huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_tf_gpt2.py](https://github.com/huggingface/transformers/blob/48d4e147d824efab97637947709d5aa67c809b3d/src/transformers/models/gpt2/modeling_tf_gpt2.py#L449): the code checked `input_ids` against `self.vocab_size`, but `resize_token_embeddings` updates `self.config.vocab_size`, so we were getting the error described in the issue. To fix this, I replaced the check with `self.config.vocab_size`, which resolved it.
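The stale-attribute pattern behind this bug can be sketched without transformers at all. The `Config`, `StaleModel`, and `FixedModel` classes below are hypothetical minimal stand-ins, not the real library classes:

```python
# A model that caches vocab_size at __init__ keeps the old value after the
# config is resized, while a model that reads it from the config stays in sync.
class Config:
    def __init__(self, vocab_size):
        self.vocab_size = vocab_size


class StaleModel:
    def __init__(self, config):
        self.config = config
        self.vocab_size = config.vocab_size  # cached copy, never updated

    def max_valid_id(self):
        return self.vocab_size - 1


class FixedModel:
    def __init__(self, config):
        self.config = config

    def max_valid_id(self):
        return self.config.vocab_size - 1  # always reflects resizes


config = Config(vocab_size=10)
stale, fixed = StaleModel(config), FixedModel(config)
config.vocab_size = 12  # simulate resize_token_embeddings adding 2 tokens
print(stale.max_valid_id(), fixed.max_valid_id())  # 9 11
```

After the simulated resize, the stale model still rejects the two new token ids (anything above 9), which is the same failure mode this PR fixes by reading `self.config.vocab_size` directly.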
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21065/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21065",
"html_url": "https://github.com/huggingface/transformers/pull/21065",
"diff_url": "https://github.com/huggingface/transformers/pull/21065.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21065.patch",
"merged_at": 1673877996000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21064
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21064/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21064/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21064/events
|
https://github.com/huggingface/transformers/issues/21064
| 1,525,661,778
|
I_kwDOCUB6oc5a78BS
| 21,064
|
Preserving gradient flow through Clip Processor
|
{
"login": "ErwannMillon",
"id": 18487334,
"node_id": "MDQ6VXNlcjE4NDg3MzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/18487334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ErwannMillon",
"html_url": "https://github.com/ErwannMillon",
"followers_url": "https://api.github.com/users/ErwannMillon/followers",
"following_url": "https://api.github.com/users/ErwannMillon/following{/other_user}",
"gists_url": "https://api.github.com/users/ErwannMillon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ErwannMillon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ErwannMillon/subscriptions",
"organizations_url": "https://api.github.com/users/ErwannMillon/orgs",
"repos_url": "https://api.github.com/users/ErwannMillon/repos",
"events_url": "https://api.github.com/users/ErwannMillon/events{/privacy}",
"received_events_url": "https://api.github.com/users/ErwannMillon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker and @amyeroberts ",
"Sorry, here's a properly formatted snippet :) These are just a few of the transformations in the hf implementation, but would be happy to properly implement all of the transforms in the CLIPImageProcessor class\r\n```\r\nclass ProcessorGradientFlow():\r\n \"\"\"\r\n This wraps the huggingface CLIP processor to allow backprop through the image processing step.\r\n The original processor forces conversion to numpy then PIL images, which is faster for image processing but breaks gradient flow. \r\n \"\"\"\r\n def __init__(self, device=\"cuda\") -> None:\r\n self.device = device\r\n self.processor = CLIPProcessor.from_pretrained(\"openai/clip-vit-large-patch14\")\r\n self.image_mean = [0.48145466, 0.4578275, 0.40821073]\r\n self.image_std = [0.26862954, 0.26130258, 0.27577711]\r\n self.normalize = torchvision.transforms.Normalize(\r\n self.image_mean,\r\n self.image_std\r\n )\r\n self.resize = torchvision.transforms.Resize(224)\r\n self.center_crop = torchvision.transforms.CenterCrop(224)\r\n def preprocess_img(self, images):\r\n images = self.center_crop(images)\r\n images = self.resize(images)\r\n images = self.center_crop(images)\r\n images = self.normalize(images)\r\n return images\r\n def __call__(self, images=[], **kwargs):\r\n processed_inputs = self.processor(**kwargs)\r\n processed_inputs[\"pixel_values\"] = self.preprocess_img(images)\r\n processed_inputs = {key:value.to(self.device) for (key, value) in processed_inputs.items()}\r\n return processed_inputs\r\n\r\n```\r\n\r\n",
"Hi @ErwannMillon, thanks for raising this issue! \r\n\r\nUnfortunately, you're right and the gradient flow won't be preserved when passing images through the image processor. This will occur even if the images aren't cast to `PIL.Image.Image` i.e. if `do_resize=False`, as all input images are converted to numpy arrays. This is to ensure all supported inputs (PIL images, and numpy, tensorflow, jax and pytorch arrays) are processed in the same way. \r\n\r\nTraining VQGAN CLIP is a great use case for our CLIP models and seems like a good fit for a [research project example](https://github.com/huggingface/transformers/tree/main/examples/research_projects). If you would like to contribute this we'd be very happy to have it added to the repo and review any PRs. ",
"Great, thanks for getting back to me. Would be happy to work on this in my spare time and submit a PR. \r\n\r\nBut just to be clear, would you just be interested in having a VQGAN-CLIP specific research project that works around the issue with the HF Processor class, or a pull request that also modifies this class directly? (for example, with a preserve_gradient or convert_to_pil parameter that would use the torchvision transforms)",
"For the VQGAN-CLIP, I already have this repo that uses the HF clip model: \r\nhttps://github.com/ErwannMillon/Simple-VQGAN-CLIP\r\n\r\nI can clean this up some more to get it to the standard of the other projects in the research project examples you sent me, but was just wondering if you would be interested in extending the CLIPProcessor class",
"> Would be happy to work on this in my spare time and submit a PR.\r\n\r\nGreat! Excited to have this added to the repo and seeing the PR :) \r\n\r\n> would you just be interested in having a VQGAN-CLIP specific research project that works around the issue with the HF Processor class, or a pull request that also modifies this class directly? \r\n\r\nA specific research project that works around the issue. For the processor class, you can choose what that looks like within the research project i.e. is it completely independent or an extension. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### Feature request
Hi,
I was using the HF CLIP implementation to build a VQGAN-CLIP implementation and noticed that `CLIPProcessor` forces conversion to PIL images for efficiency. However, when torch tensor images are passed to the processor, this breaks gradient flow.
### Motivation
Would like to be able to backpropagate through CLIP image processing steps
### Your contribution
This was my quick and hacky fix, using torchvision to do the same transformations and processing steps. I'd be happy to properly code up a better equivalent and submit a pull request if you think this is a feature worth adding.
```python
import torch
import torchvision
from transformers import CLIPProcessor


class ProcessorGradientFlow():
    """
    Wraps the Hugging Face CLIP processor to allow backprop through the image
    processing step. The original processor forces conversion to numpy and then
    PIL images, which is faster for image processing but breaks gradient flow.
    """
    def __init__(self, device="cuda") -> None:
        self.device = device
        self.processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
        self.image_mean = [0.48145466, 0.4578275, 0.40821073]
        self.image_std = [0.26862954, 0.26130258, 0.27577711]
        self.normalize = torchvision.transforms.Normalize(self.image_mean, self.image_std)
        self.resize = torchvision.transforms.Resize(224)
        self.center_crop = torchvision.transforms.CenterCrop(224)

    def preprocess_img(self, images):
        # Resize then center-crop (the standard CLIP order), then normalize --
        # all differentiable torchvision tensor ops.
        images = self.resize(images)
        images = self.center_crop(images)
        images = self.normalize(images)
        return images

    def __call__(self, images=[], **kwargs):
        # Let the original processor handle text inputs; swap in the
        # differentiable pixel pipeline above.
        processed_inputs = self.processor(**kwargs)
        processed_inputs["pixel_values"] = self.preprocess_img(images)
        processed_inputs = {key: value.to(self.device) for (key, value) in processed_inputs.items()}
        return processed_inputs
```
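To make the failure mode concrete, here is a minimal sketch (assuming only `torch` is installed; no CLIP model is loaded) of how the numpy round-trip inside the processor detaches a tensor from the autograd graph:

```python
import torch

# A tensor that participates in autograd, shaped like a batch-free RGB image.
x = torch.randn(3, 224, 224, requires_grad=True)

# Differentiable path: plain tensor ops preserve the graph.
y = (x * 2.0).mean()
assert y.requires_grad

# Non-differentiable path: torch refuses .numpy() on a grad-tracking tensor,
# so any numpy-based processor must call .detach() first -- and the tensor
# that comes back from numpy carries no gradient history.
z = torch.from_numpy(x.detach().numpy()).mean()
assert not z.requires_grad
```

This is why keeping the whole preprocessing pipeline in torch ops is required for backprop through the processing step.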
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21064/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21063
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21063/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21063/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21063/events
|
https://github.com/huggingface/transformers/pull/21063
| 1,525,315,969
|
PR_kwDOCUB6oc5G9jzF
| 21,063
|
[WIP] [Whisper] Add specaugment
|
{
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Seems like a very needed feature! what is the status? was this functionality tested? ",
"And as mentioned by @samuelazran we should add at least one test, if possible comparing with the original masking (if openAI added it to their codebase) otherwise an integration test.",
"I was waiting for the validation of basic functions to continue the further work. Thanks for the comments! Will finish the rest\r\n",
"Hi @ArthurZucker, do you have any suggestions of how to differentiate train and validation/test sets in order to only augment train set ?\r\n\r\nIn my mind, we perhaps need to add SpecAugment related parameters to the `__call__` function of `WhisperFeatureExtractor`, then update training example script here https://github.com/huggingface/transformers/blob/2411f0e465e761790879e605a4256f3d4afb7f82/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py#L428-L447\r\n\r\nto \r\n\r\n```python\r\ndef prepare_dataset(batch, **kwargs):\r\n # process audio\r\n sample = batch[audio_column_name]\r\n inputs = feature_extractor(sample[\"array\"], sampling_rate=sample[\"sampling_rate\"], **kwargs)\r\n # process audio length\r\n batch[model_input_name] = inputs.get(model_input_name)[0]\r\n batch[\"input_length\"] = len(sample[\"array\"])\r\n\r\n # process targets\r\n input_str = batch[text_column_name].lower() if do_lower_case else batch[text_column_name]\r\n batch[\"labels\"] = tokenizer(input_str).input_ids\r\n return batch\r\n\r\n\r\nwith training_args.main_process_first(desc=\"dataset map pre-processing\"):\r\n vectorized_datasets = DatasetDict()\r\n\r\n if training_args.do_train:\r\n # NB: also add SpecAugment parameters to DataTrainingArguments\r\n vectorized_datasets[\"train\"] = raw_datasets[\"train\"].map(\r\n lambda example: prepare_dataset(\r\n example,\r\n apply_spec_augment=data_args.apply_spec_augment,\r\n mask_time_prob=data_args.mask_time_prob,\r\n mask_feature_prob=data_args.mask_feature_prob,\r\n ),\r\n remove_columns=next(iter(raw_datasets.values())).column_names,\r\n num_proc=data_args.preprocessing_num_workers,\r\n desc=\"preprocess train dataset\",\r\n )\r\n\r\n if training_args.do_eval:\r\n vectorized_datasets[\"eval\"] = raw_datasets[\"eval\"].map(\r\n prepare_dataset,\r\n remove_columns=next(iter(raw_datasets.values())).column_names,\r\n num_proc=data_args.preprocessing_num_workers,\r\n desc=\"preprocess eval dataset\",\r\n 
)\r\n```\r\n\r\nAlso cc @sanchit-gandhi :)",
"I think I am in favor of just adding the `do_spec_augment` argument in the call of the feature extractor, which will default to `False`. The processing of training and validation should indeed be taken care of outside of the modelling.",
"Hey @bofenghuang,\r\n\r\nReally cool to see this new feature addition for SpecAug! Could well provide a nice boost for Whisper fine-tuning π\r\n\r\nNot sure I fully agree that we should add SpecAug to the feature extractor. IMO it's a regularisation technique that belongs in the modelling file which is in many ways analogous to dropout (we wouldn't ever add dropout to the feature extractor - this is a method that relates to the modelling code and thus we add it there).\r\n\r\nAdding SpecAug to the feature extractor causes two problems:\r\n1. We pre-process our training dataset once at the start of training to obtain our log-Mel spectrograms. Using SpecAug in our feature extractor means that we generate a **fixed set** of masked features in these spectrograms. If we train for multiple epochs, we re-use our pre-processed dataset, and so have the **same** masked features for each epoch. This is analogous to dropping out the same nodes each time we do dropout -> the model will fit to these fixed SpecAug features, defeating the point of using this regularisation technique! What we actually want to do is mask **different** features in our spectrograms each time we use the data, i.e. mask in a stochastic way. \r\n2. We need different pre-processing logic for our train/eval sets. We need to 'turn on' SpecAug for the train set and 'turn off' SpecAug for the eval set. \r\n\r\nBoth of these problems are bypasses by putting SpecAug in the modelling file:\r\n1. We mask a different set of features at each forward pass in a stochastic way ('true' form of dropout)\r\n2. We only apply SpecAug when we train, which we can access with the attribute `self.training`. See: https://github.com/huggingface/transformers/blob/f0fc7912980234f3711b261f13b4e77fa7a43fb5/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1253-L1254\r\n\r\nSo if it's ok with you I think we should modify this PR to move the SpecAug logic to the modelling file!",
"Oh I see thanks for thinking this far @sanchit-gandhi ! You are indeed right ππ» Sorry @bofenghuang for missleading you π
",
"Hi @sanchit-gandhi,\r\n\r\nThanks and totally agree with you! I've put it in the feature extractor just because it's a numpy version. I think we perhaps need to re-write it to pytorch if we want to have it in modeling? cc @ArthurZucker ",
"Think we can apply the same logic that we do in Wav2Vec2 and compute the mask using NumPy (no matmuls here, simply building a binary array of indices to mask/not mask in a stochastic way) and apply the mask in PyTorch to our tensors (hidden states).\r\n\r\nSo `_compute_mask_indices` is NumPy:\r\nhttps://github.com/huggingface/transformers/blob/071529bd548c52b27d3a3d9414db086692b37d2f/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L133\r\n\r\nAnd `_mask_hidden_states` PyTorch:\r\nhttps://github.com/huggingface/transformers/blob/071529bd548c52b27d3a3d9414db086692b37d2f/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1232\r\n\r\nYou can probably copy these two methods directly from `modeling_wav2vec2.py` and apply the masking as required to the `input_features` in Whisper!"
] | 1,673
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Hi @ArthurZucker π,
As discussed in another conversation, in this PR I try to add [SpecAugment](https://arxiv.org/abs/1904.08779) to Whisper models. It was used as one of the regularization methods to train the `large-v2` model (https://github.com/openai/whisper/discussions/661).
Here, SpecAugment is implemented in `WhisperFeatureExtractor` using numpy. It masks the computed fbank features along the time and feature axes.
Here are the steps I have in mind. Please correct me if I missed something.
- [x] Return `attention_mask` by `pad` function to get the actual input lengths in the batch. And rescale it from sample level to feature level (48000 -> 3000)
- [x] Copy `_compute_mask_indices` function of wav2vec2, which will be used to generate masks
- [x] Add `_mask_input_features` function to mask along time or feature axis
- [ ] Add `apply_spec_augment`, `mask_time_prob`, etc to config and `__call__` function
It's still in draft. I will add the parameters to config and fix the test errors later :)
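As a rough illustration of the masking step, here is a hypothetical numpy helper in the spirit of wav2vec2's `_compute_mask_indices` (the function name, parameters, and span-count heuristic are invented for this sketch, not the actual implementation):

```python
import numpy as np

def compute_time_mask(num_frames, mask_prob=0.05, mask_length=10, seed=None):
    """Sketch of SpecAugment-style time masking: returns a boolean mask of
    shape (num_frames,) where True marks frames to zero out."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(num_frames, dtype=bool)
    # Expected number of mask spans given the probability and span length.
    num_spans = int(mask_prob * num_frames / mask_length)
    starts = rng.integers(0, num_frames - mask_length, size=num_spans)
    for start in starts:
        mask[start : start + mask_length] = True
    return mask

# Apply to a dummy (num_frames, num_mel_bins) log-Mel feature matrix
# (3000 frames matches Whisper's 30s input at feature level).
features = np.ones((3000, 80))
mask = compute_time_mask(num_frames=3000, seed=0)
features[mask] = 0.0
```

The same idea applies along the feature axis by masking columns instead of rows.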
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21063/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21063",
"html_url": "https://github.com/huggingface/transformers/pull/21063",
"diff_url": "https://github.com/huggingface/transformers/pull/21063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21063.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21062
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21062/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21062/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21062/events
|
https://github.com/huggingface/transformers/pull/21062
| 1,525,179,686
|
PR_kwDOCUB6oc5G9GAA
| 21,062
|
Fixed low_cpu_mem_usage issue #21039
|
{
"login": "susnato",
"id": 56069179,
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susnato",
"html_url": "https://github.com/susnato",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"repos_url": "https://api.github.com/users/susnato/repos",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, @sgugger I solved the `check_code_quality` test but the `tests_torch` is still giving me error, I ran the whole test locally which was failing before(by checking from this [link](https://app.circleci.com/pipelines/github/huggingface/transformers/55152/workflows/03ea844b-2a40-4142-ab12-f7378c66ea5f/jobs/665222)) and also ran the specific test locally(tests/models/auto/test_modeling_auto.py) which was causing the error, both seem to run perfectly fine in my local system. (I also updated the environment before running them locally).\r\n\r\nWould you please look in this matter? I can't seem to find the problem why tests are failing..... ",
"Hi, @sgugger I did all the changes you mentioned, and all the checks are successful now. "
] | 1,673
| 1,706
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #21039
When using `AutoModelForCausalLM.from_pretrained(..., low_cpu_mem_usage=True)`, some models (with modified configs) have problems loading their weights from the model state dict; this PR solves that.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21062/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21062",
"html_url": "https://github.com/huggingface/transformers/pull/21062",
"diff_url": "https://github.com/huggingface/transformers/pull/21062.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21062.patch",
"merged_at": 1673514194000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21061
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21061/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21061/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21061/events
|
https://github.com/huggingface/transformers/issues/21061
| 1,525,131,814
|
I_kwDOCUB6oc5a56om
| 21,061
|
Force_download=True not working, `No such file or directory: './.cache/models--togethercomputer--GPT-JT-6B-v1/refs/main'`
|
{
"login": "bhavnicksm",
"id": 11348086,
"node_id": "MDQ6VXNlcjExMzQ4MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/11348086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavnicksm",
"html_url": "https://github.com/bhavnicksm",
"followers_url": "https://api.github.com/users/bhavnicksm/followers",
"following_url": "https://api.github.com/users/bhavnicksm/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavnicksm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavnicksm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavnicksm/subscriptions",
"organizations_url": "https://api.github.com/users/bhavnicksm/orgs",
"repos_url": "https://api.github.com/users/bhavnicksm/repos",
"events_url": "https://api.github.com/users/bhavnicksm/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavnicksm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, @bhavnicksm can you provide more information about your system (by using `transformers-cli env` in terminal) so that it will be easier to reproduce the code? ",
"Can you also tell us if there is folder `./.cache/` where you execute this code? Reading the error, it might simply be because the cache folder was not properly created.",
"@sgugger The cache folder gets created properly and there's the folder of that name, but the folder is not populated with any files. It's just empty.",
"Hi @susnato updated the original issue with the relevant information. Thanks for the suggestion to use `transformers-cli`. ",
"I ran your code snippet and cannot reproduce your issue. I also don't understand why that snippet of code should download anything anew and not look at the cache.",
"@sgugger the issue has been resolved, thanks for looking into it! π«\n\nI believe it was a connection issue with the servers because even SentenceTransformers was giving a error but something about how it couldn't connect to HF.\n\nThey both started to work at the same time a few hours later. \n\nAbout the logic in the reproduction code, the default cache path wasn't working so providing another cache path with `force_download=True` might make it download again. Nevermind since it's been resolved π"
] | 1,673
| 1,673
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-57-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@younesbelkada @ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Got the error while running the following command:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/GPT-JT-6B-v1", cache_dir='./.cache/')
```
### Expected behavior
Should download the model anew rather than looking in the cache
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21061/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21060
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21060/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21060/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21060/events
|
https://github.com/huggingface/transformers/pull/21060
| 1,525,096,973
|
PR_kwDOCUB6oc5G8z7S
| 21,060
|
add GPTSAN model
|
{
"login": "tanreinama",
"id": 51933889,
"node_id": "MDQ6VXNlcjUxOTMzODg5",
"avatar_url": "https://avatars.githubusercontent.com/u/51933889?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanreinama",
"html_url": "https://github.com/tanreinama",
"followers_url": "https://api.github.com/users/tanreinama/followers",
"following_url": "https://api.github.com/users/tanreinama/following{/other_user}",
"gists_url": "https://api.github.com/users/tanreinama/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanreinama/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanreinama/subscriptions",
"organizations_url": "https://api.github.com/users/tanreinama/orgs",
"repos_url": "https://api.github.com/users/tanreinama/repos",
"events_url": "https://api.github.com/users/tanreinama/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanreinama/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21060). All of your documentation changes will be reflected on that endpoint.",
"generate() now works with greedy_gen_mode, but I want contrastive_search to be the default. Is there any reference code somewhere for that?",
"Yes, contrastive search should be supported in `transformers` but I think you need to tweak the caching mechanism. Maybe @gante can help here as I am not really sure π ",
"ok. I was confused about contrastive_search. This will work almost fine.\r\n```\r\nmodel.config.use_cache=True\r\nmodel.config.do_sample=True\r\nc = model.generate(x_tok, logits_processor=LogitsProcessorList([TopKLogitsWarper(120)]))\r\n```\r\n\r\nI would like to override _get_logits_processor and add TopKLogitsWarper to the default logs_processor.\r\n```\r\nlogits_processor = super()._get_logits_processor(...)\r\nif generation_config.top_k is not None:\r\n logits_processor.append(TopKLogitsWarper(generation_config.top_k))\r\nreturn logits_processor\r\n```\r\n\r\nThere was also a misunderstanding about the caching mechanism. I thought that cache saves everything up to the last time, and that SequenceLength is 1 every time forward is called, but it seems that's not the case. I can make it compatible.",
"@tanreinama @younesbelkada contrastive search _should_ work out of the box if the model uses the usual caching mechanism. Prefix LM models are not the case, sadly (it's probably the same issue as GIT, which is also a prefix LM model) π
\r\n\r\nI'd suggest to skip contrastive search for now, and to fix it in a subsequent PR (skip = skip tests and override `contrastive_search` such that an informative exception is thrown). I should be able to give better advice after I see what's happening with GIT :)",
"@ArthurZucker @younesbelkada\r\nI have committed some updates in response to your comments.\r\n\r\nIn unit tests, the `wav2vec2_with_lm` module is causing errors. Is this due to conflicts? I didn't touch the this package...",
"The failing test is not related to you! If you pull from main it might get resolved! However you have `FAILED tests/models/gptsan_japanese/test_modeling_gptsan_japanese.py::GPTSANJapaneseForConditionalGenerationTest::test_logits - Failed: Timeout >120.0s` which means either the test should be marked as `#slow` or there is an issue int his test ππ» ",
"I sync and pull from main so it closed automatically. I'll create new PR from merged code. thx,"
] | 1,673
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# Model description
GPTSAN is a Japanese language model using Switch Transformer. It has the same structure as the model introduced as Prefix LM in the T5 paper, and works with both Text Generation and Masked Language Modeling.
To add this model to Transformers, I did the following:
Porting GPTSAN to PyTorch. Model conversion. Creating model cards on the Hugging Face Hub. Porting generation code.
The model card has already been uploaded. (https://huggingface.co/Tanrei/GPTSAN-japanese/)
Tokenizer uses GPT-NeoX-Japanese, and only new vocabulary files are uploaded to the model card. Minor differences are absorbed within the generation algorithm in the model's source code.
GPTSAN repository is:
https://github.com/tanreinama/GPTSAN
Discussion of HuggingFace integration is:
https://github.com/tanreinama/GPTSAN/issues/2
Thanks to: @ArthurZucker and @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21060/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21060/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21060",
"html_url": "https://github.com/huggingface/transformers/pull/21060",
"diff_url": "https://github.com/huggingface/transformers/pull/21060.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21060.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21059
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21059/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21059/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21059/events
|
https://github.com/huggingface/transformers/issues/21059
| 1,525,045,374
|
I_kwDOCUB6oc5a5lh-
| 21,059
|
Can the transformer models run without any local storage at all?
|
{
"login": "varnlp",
"id": 85830853,
"node_id": "MDQ6VXNlcjg1ODMwODUz",
"avatar_url": "https://avatars.githubusercontent.com/u/85830853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/varnlp",
"html_url": "https://github.com/varnlp",
"followers_url": "https://api.github.com/users/varnlp/followers",
"following_url": "https://api.github.com/users/varnlp/following{/other_user}",
"gists_url": "https://api.github.com/users/varnlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/varnlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varnlp/subscriptions",
"organizations_url": "https://api.github.com/users/varnlp/orgs",
"repos_url": "https://api.github.com/users/varnlp/repos",
"events_url": "https://api.github.com/users/varnlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/varnlp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is indeed to supported by the library.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,676
| 1,676
|
NONE
| null |
### Feature request
We have a use case where we'd like to download transformer models from our S3 or other storage location directly into memory (without saving it in local storage), finetune the model and save the final model directly to the remote storage through an API.
We're wondering if this use case of not using local storage at all is possible using the current library?
### Motivation
Our use case requires minimizing local storage usage.
### Your contribution
We are trying to figure out if this feature is already supported
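As a rough sketch of the pattern being asked about (everything here is a hypothetical illustration, not the library's API), model weights can round-trip through an in-memory buffer to a remote store without ever touching local disk. With PyTorch you would swap `pickle.dump`/`pickle.loads` for `torch.save`/`torch.load` on the same buffers and restore via `model.load_state_dict(...)`:

```python
import io
import pickle

class MemoryStore:
    """Hypothetical stand-in for a remote blob store such as S3."""
    def __init__(self):
        self._blobs = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

# "Upload": serialize weights straight into an in-memory buffer.
weights = {"layer.weight": [1.0, 2.0], "layer.bias": [0.5]}
buf = io.BytesIO()
pickle.dump(weights, buf)                 # torch.save(model.state_dict(), buf) in PyTorch
store = MemoryStore()
store.put("model.bin", buf.getvalue())

# "Download": read the bytes back into memory and deserialize.
restored = pickle.loads(store.get("model.bin"))  # torch.load(io.BytesIO(...)) in PyTorch
assert restored == weights
```

Whether `from_pretrained` itself can consume such a buffer directly is a separate question; the sketch only shows that the serialization round trip itself needs no local files.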
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21059/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21058
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21058/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21058/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21058/events
|
https://github.com/huggingface/transformers/issues/21058
| 1,524,904,856
|
I_kwDOCUB6oc5a5DOY
| 21,058
|
`rank = dist.get_rank()` throws `group error` while loading model with running `AutoModelForSeq2SeqLM.from_pretrained` using deepspeed
|
{
"login": "SoundProvider",
"id": 48939336,
"node_id": "MDQ6VXNlcjQ4OTM5MzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/48939336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SoundProvider",
"html_url": "https://github.com/SoundProvider",
"followers_url": "https://api.github.com/users/SoundProvider/followers",
"following_url": "https://api.github.com/users/SoundProvider/following{/other_user}",
"gists_url": "https://api.github.com/users/SoundProvider/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SoundProvider/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoundProvider/subscriptions",
"organizations_url": "https://api.github.com/users/SoundProvider/orgs",
"repos_url": "https://api.github.com/users/SoundProvider/repos",
"events_url": "https://api.github.com/users/SoundProvider/events{/privacy}",
"received_events_url": "https://api.github.com/users/SoundProvider/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"That traceback doesn't look like an issue in the HF integration, indeed could you try some more recent pytorch first? You're not even using a released version of 1.8, but some nightly/rc version (`torch version .................... 1.8.0a0+1606899`). I'd try 1.12 or 1.13 (latest).\r\n\r\nPlease let me know if it doesn't help and I will try to reproduce it.",
"> That traceback doesn't look like an issue in the HF integration, indeed could you try some more recent pytorch first? You're not even using a released version of 1.8, but some nightly/rc version (`torch version .................... 1.8.0a0+1606899`). I'd try 1.12 or 1.13 (latest).\r\n> \r\n> Please let me know if it doesn't help and I will try to reproduce it.\r\n\r\nI've changed the environment to following setting\r\n- `transformers` version: 4.22.2\r\n- Platform: Linux-4.19.93-1.nbp.el7.x86_64-x86_64-with-glibc2.10\r\n- Python version: 3.8.13\r\n- Huggingface_hub version: 0.11.0\r\n- PyTorch version (GPU?): 1.13.0a0+08820cb (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n\r\nAnd I succeeded running the script:) Thank you\r\nAlthough, I've noticed that when using deepspeed with huggingface Trainer, Training info gives `Number of trainable parameters` as zero\r\n```\r\n[INFO|trainer.py:1643] 2023-01-10 04:55:31,025 >> ***** Running training *****\r\n[INFO|trainer.py:1644] 2023-01-10 04:55:31,025 >> Num examples = 10\r\n[INFO|trainer.py:1645] 2023-01-10 04:55:31,025 >> Num Epochs = 3\r\n[INFO|trainer.py:1646] 2023-01-10 04:55:31,025 >> Instantaneous batch size per device = 8\r\n[INFO|trainer.py:1647] 2023-01-10 04:55:31,025 >> Total train batch size (w. parallel, distributed & accumulation) = 16\r\n[INFO|trainer.py:1648] 2023-01-10 04:55:31,025 >> Gradient Accumulation steps = 1\r\n[INFO|trainer.py:1649] 2023-01-10 04:55:31,025 >> Total optimization steps = 3\r\n[INFO|trainer.py:1650] 2023-01-10 04:55:31,028 >> Number of trainable parameters = 0\r\n```\r\n\r\nThe same issue is here(https://discuss.huggingface.co/t/deepspeed-with-trainer-no-of-trainable-parameters-coming-to-be-0/27187).\r\nThank you in advance @stas00:) I really appreciate your kindness",
"great to hear that it worked, @SoundProvider \r\n\r\n> Although, I've noticed that when using deepspeed with huggingface Trainer, Training info gives Number of trainable parameters as zero\r\n\r\nThat means that the params weren't gathered under zero3. and when zero3 is used deepspeed puts placeholders with tensors of zero3. Please create a new issue and I will fix it. or if you feel inspired you can contribute a few lines of code that will check if the model is running under deepspeed and gather the params. It'd be something like this:\r\n\r\n```\r\n if is_deepspeed_zero3_enabled():\r\n import deepspeed\r\n size = 0\r\n for param in model.parameters():\r\n with deepspeed.zero.GatheredParameters(param, modifier_rank=None):\r\n size += param.numel()\r\n```\r\n\r\nwe do it one param at a time to avoid loading a potentially huge model onto cpu.",
"I'd love to try it out.\r\nI will go through some the codes and make a new issue if I find a way:)\r\nThank you",
"Wonderful!\r\n\r\nThe other service we could provide to future users is to find out which minimal pt version is required to make it work and assert if it's not the right one - in case you're interested to explore that one - but by all means this is only an invitation, please feel no pressure to do anything unless it gives you goosebumps when you think of doing it.",
"@stas00 \r\nHello Stas. I've tested running two different models with both deepspeed and torch DDP. As you can see below, t5-large with deepspeed uses much less GPU memory than torch DDP, while OPT model with deepspeed doesn't show useful decrease.\r\nI've looked through deepspeed codes couldn't find any hints,,\r\nI have 2 questions\r\n- What would cause the differenct GPU memory decrease between two models? From what I've understood from [deepspeed.initialize](https://github.com/microsoft/DeepSpeed/blob/fe728e3ed880f27de2c21234f12b7aa6f672e825/deepspeed/runtime/pipe/engine.py#L138), deepspeed handles only tensors, not model blocks.\r\n- After what I read from[ deepspeed memory efficiency](https://www.deepspeed.ai/training/#memory-efficiency), I expected t5-large with deepspeed would show much more GPU memory decrease than I tested. Could you tell me any hint?\r\nThank you beforehand for your great works\r\n\r\n\r\n### Experiments\r\n1. t5-large\r\n - deepspeed\r\n - script: `rm -r output_dir; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 2 \\\r\n examples/pytorch/translation/run_translation.py --model_name_or_path t5-large \\\r\n --output_dir output_dir --overwrite_output_dir --max_source_length 128 \\\r\n --max_target_length 128 --val_max_target_length 128 --do_train \\\r\n --num_train_epochs 1 --learning_rate 3e-3 \\\r\n --dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro \\\r\n --source_prefix 'translate English to Romanian: ' --max_train_samples 5 \\\r\n --deepspeed tests/deepspeed/ds_config_zero3_NSML_test.json --per_device_train_batch_size 1`\r\n - \r\n - torch DDP\r\n - script: `python -m torch.distributed.launch --nproc_per_node=2 examples/pytorch/translation/run_translation.py --model_name_or_path t5-large \\\r\n--output_dir output_dir --overwrite_output_dir --max_source_length 128 \\\r\n--max_target_length 128 --val_max_target_length 128 --do_train \\\r\n--num_train_epochs 10 --learning_rate 3e-3 \\\r\n--dataset_name wmt16 
--dataset_config ro-en --source_lang en --target_lang ro \\\r\n--source_prefix 'translate English to Romanian: ' --max_train_samples 5 --per_device_train_batch_size 1`\r\n - \r\n\r\n2. OPT\r\n - model: [link](https://github.com/huggingface/transformers/tree/main/src/transformers/models/opt), version: 4.26.0.dev0\r\n - used [OPTForCausalLM](https://github.com/huggingface/transformers/blob/2411f0e465e761790879e605a4256f3d4afb7f82/src/transformers/models/opt/modeling_opt.py#L808), with custom dataset\r\n - deepspeed\r\n - script: `rm -r output_dir; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 2 \\\r\nrun_opt.py --model_name_or_path facebook/opt-1.3b --output_dir test \\\r\n--deepspeed ../tests/deepspeed/ds_config_zero3_NSML_test.json --do_train True --do_eval True \\\r\n--per_device_train_batch_size 1`\r\n - \r\n - torch DDP\r\n - script: `rm -r test; python -m torch.distributed.launch --nproc_per_node=2 run_opt.py --model_name_or_path facebook/opt-1.3b --output_dir test \\\r\n--do_train True --do_eval True --per_device_train_batch_size 1`\r\n - \r\n\r\n\r\n#### env info\r\n- `transformers` version: 4.26.0.dev0\r\n- Platform: Linux-4.19.93-1.nbp.el7.x86_64-x86_64-with-glibc2.10\r\n- Python version: 3.8.13\r\n- Huggingface_hub version: 0.11.0\r\n- PyTorch version (GPU?): 1.13.0a0+08820cb (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n",
"I won't trust `nvidia-smi` for measuring memory usage patterns, as it is not aware of cuda caching and you can't see peak memory usage either.\r\n\r\nYou can repeat the above runs, but add `--skip_memory_metrics 0` and it'll print you all the memory usage stats at the end of each run. (only use this for debug as it slows training down)\r\n\r\nI'm not saying that you still won't see an issue, but I'm asking to do that as it'd give us a much precise memory usage stats.\r\n\r\nand ideally please make it into a new Issue and let's close this one. As this discussion is now totally unrelated to the topic of this Issue.\r\n\r\nThanks.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,676
| 1,676
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-3.10.0-514.26.2.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.5
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.8.0a0+1606899 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: yes(deepseed)
### Who can help?
@stas00
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm trying to test running `examples/pytorch/translation/run_translation.py` with `deepspeed`, using this [example](https://github.com/huggingface/transformers/issues/17534#issuecomment-1146249686) @stas00 had written (thanks beforehand)
- script I've run
```
rm -r output_dir; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 4 \
examples/pytorch/translation/run_translation.py --model_name_or_path t5-small \
--output_dir output_dir --overwrite_output_dir --max_source_length 128 \
--max_target_length 128 --val_max_target_length 128 --do_train \
--num_train_epochs 1 --learning_rate 3e-3 \
--dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro \
--source_prefix 'translate English to Romanian: ' --max_train_samples 5 \
--deepspeed tests/deepspeed/ds_config_zero3_test.json --save_steps 5
```
- `ds_config_zero3_test.json`
  - I changed `gradient_accumulation_steps, train_batch_size, train_micro_batch_size_per_gpu` values from `auto` to some int values, since the `auto` value threw an error such as `check batch related parameters. train_batch_size is not equal to micro_batch_per_gpu * gradient_acc_step * world_size 8 != 2 * 1 * 1`
```
%%bash
cat <<'EOT' > ds_config_zero3_test.json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_fp16_weights_on_model_save": true
},
"gradient_accumulation_steps": 1,
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": 2,
"train_micro_batch_size_per_gpu": 2,
"wall_clock_breakdown": false
}
EOT
```
- ERROR message
  - An error occurs while loading the model: getting rank info via `torch.distributed.get_rank` throws `RuntimeError: The given group does not exist`
  - Maybe it's because I'm using an older version of PyTorch? (`1.8.0a0+1606899`).
```
[2023-01-09 00:34:14,932] [INFO] [partition_parameters.py:709:__init__] _all_gather_base API is not available in torch 1.8.0a0+1606899
Traceback (most recent call last):
File "examples/pytorch/translation/run_translation.py", line 660, in <module>
main()
File "examples/pytorch/translation/run_translation.py", line 374, in main
model = AutoModelForSeq2SeqLM.from_pretrained(
File "/home/transformers/src/transformers/models/auto/auto_factory.py", line 463, in from_pretrained
return model_class.from_pretrained(
File "/home/transformers/src/transformers/modeling_utils.py", line 2299, in from_pretrained
with ContextManagers(init_contexts):
File "/home/transformers/src/transformers/utils/generic.py", line 359, in __enter__
self.stack.enter_context(context_manager)
File "/opt/conda/lib/python3.8/contextlib.py", line 425, in enter_context
result = _cm_type.__enter__(cm)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 400, in __enter__
print_rank_0(
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 49, in print_rank_0
rank = dist.get_rank()
File "/opt/conda/lib/python3.8/site-packages/deepspeed/comm/comm.py", line 575, in get_rank
return cdb.get_rank(group)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/comm/torch.py", line 175, in get_rank
return torch.distributed.get_rank(group=group)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 645, in get_rank
return _get_group_rank(group, _default_pg.rank())
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 191, in _get_group_rank
raise RuntimeError("The given group does not exist")
RuntimeError: The given group does not exist
```
- `ds_report`
```
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
[WARNING] please install triton==1.0.0 if you want to use sparse attention
sparse_attn ............ [NO] ....... [NO]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
utils .................. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
spatial_inference ...... [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/opt/conda/lib/python3.8/site-packages/torch']
torch version .................... 1.8.0a0+1606899
torch cuda version ............... 11.1
torch hip version ................ None
nvcc version ..................... 11.1
deepspeed install path ........... ['/opt/conda/lib/python3.8/site-packages/deepspeed']
deepspeed info ................... 0.7.7, unknown, unknown
deepspeed wheel compiled w. ...... torch 1.8, cuda 11.1
```
### Expected behavior
I expected it to run with multiple GPUs, but it does not.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21058/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21057
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21057/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21057/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21057/events
|
https://github.com/huggingface/transformers/issues/21057
| 1,524,747,181
|
I_kwDOCUB6oc5a4cut
| 21,057
|
Whisper decoding returns exception about outputs.logits shape
|
{
"login": "nshmyrev",
"id": 2886672,
"node_id": "MDQ6VXNlcjI4ODY2NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2886672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nshmyrev",
"html_url": "https://github.com/nshmyrev",
"followers_url": "https://api.github.com/users/nshmyrev/followers",
"following_url": "https://api.github.com/users/nshmyrev/following{/other_user}",
"gists_url": "https://api.github.com/users/nshmyrev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nshmyrev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nshmyrev/subscriptions",
"organizations_url": "https://api.github.com/users/nshmyrev/orgs",
"repos_url": "https://api.github.com/users/nshmyrev/repos",
"events_url": "https://api.github.com/users/nshmyrev/events{/privacy}",
"received_events_url": "https://api.github.com/users/nshmyrev/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker ",
"Hey! Could you provide a reproducing script with the dataset? The file might be corrupted.",
"To reproduce you can try this code\r\n\r\n```\r\n#!/usr/bin/env python3\r\n\r\nfrom transformers import WhisperProcessor, WhisperForConditionalGeneration\r\nimport torch\r\nimport torchaudio\r\n\r\nprocessor = WhisperProcessor.from_pretrained(\"mitchelldehaven/whisper-large-v2-ru\")\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"mitchelldehaven/whisper-large-v2-ru\")\r\n\r\nspeech_array, sampling_rate = torchaudio.load(\"test.wav\")\r\nresampler = torchaudio.transforms.Resample(sampling_rate, 16_000)\r\nsound = resampler(speech_array).squeeze().numpy()\r\ninput_features = processor(sound, return_tensors=\"pt\", sampling_rate=16_000).input_features\r\n\r\nwith torch.no_grad():\r\n generated_ids = model.generate(inputs=input_features, max_length=1000)\r\n transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]\r\n```\r\n\r\nwith the attached file\r\n\r\n[test.zip](https://github.com/huggingface/transformers/files/10386796/test.zip)\r\n\r\nThis thing happens with fine-tuned models between, not original ones.",
"I have the same issue. Model is not finetuned\r\n\r\nCould you find a workaround @nshmyrev ?",
"In this case, using an original model works:\r\n```python \r\nfrom transformers import WhisperProcessor, WhisperForConditionalGeneration\r\nimport torchaudio\r\nimport torch\r\n\r\nfn = \"/home/arthur_huggingface_co/transformers/Arthur/test.wav\"\r\nprocessor = WhisperProcessor.from_pretrained(\"openai/whisper-large\")\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-large\")\r\nspeech_array, sampling_rate = torchaudio.load(fn)\r\nresampler = torchaudio.transforms.Resample(sampling_rate, 16_000)\r\nsound = resampler(speech_array).squeeze().numpy()\r\ninput_features = processor(sound, return_tensors=\"pt\", sampling_rate=16_000).input_features\r\n\r\nwith torch.no_grad():\r\n generated_ids = model.generate(inputs=input_features, max_length=1000)\r\ntranscription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]\r\nprint(transcription)\r\n```\r\n\r\nI get \r\n```python \r\n Duh duh duh duh uh huh.\r\n```\r\n\r\nWhen running with your model however, it seems that the `max_len` parameter is not taken into account, and the `input_ids` have a length of `449` which provokes the error. The model should stop. This can be caused because of various things, but I recommend setting the `max_length` to `448` as the model should not be fed with larger inputs. (it is the case for the original models. \r\n\r\n@RuABraun can you share the audio and a reproduction script? \r\n",
"I fixed it by lowering max_length. Thanks"
] | 1,673
| 1,675
| 1,675
|
NONE
| null |
### System Info
`transformers` version: 4.26.0.dev0
- Platform: Linux-5.10.0-20-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
Same error on cuda servers
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run simple decoding with Whisper large:
```
speech_array, sampling_rate = torchaudio.load(fn)
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
sound = resampler(speech_array).squeeze().numpy()
input_features = processor(sound, return_tensors="pt", sampling_rate=16_000).input_features
with torch.no_grad():
generated_ids = model.generate(inputs=input_features, max_length=1000)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
Result is an exception:
```
Traceback (most recent call last):
File "/home/user/test_whisper_hf.py", line 37, in <module>
generated_ids = model.generate(inputs=input_features, max_length=1000)
File "/home/user/.local/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/user/.local/lib/python3.9/site-packages/transformers-4.26.0.dev0-py3.9.egg/transformers/generation/utils.py", line 1352, in generate
return self.greedy_search(
File "/home/user/.local/lib/python3.9/site-packages/transformers-4.26.0.dev0-py3.9.egg/transformers/generation/utils.py", line 2135, in greedy_search
next_token_logits = outputs.logits[:, -1, :]
IndexError: index -1 is out of bounds for dimension 1 with size 0
```
The output on this problematic file is
```
Seq2SeqLMOutput(loss=None, logits=tensor([], size=(1, 0, 51865)), past_key_values=((tensor([[[[ 1.3006e+00, -4.4066e-02, -2.5518e-02, ..., 1.6218e-01,
```
This happens only with a single file in the dataset of 10k files.
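The failure mode can be guarded against around `generate`. The sketch below is illustrative only (NumPy stands in for the real logits tensor, and `last_token_logits` is a hypothetical helper, not library code); it shows the zero-length time dimension that makes `logits[:, -1, :]` raise:

```python
import numpy as np

def last_token_logits(logits):
    # greedy_search crashes because outputs.logits has shape (1, 0, vocab),
    # so logits[:, -1, :] indexes an empty dimension. Fail with a clearer
    # error instead, so the one bad file out of 10k can be skipped.
    if logits.shape[1] == 0:
        raise ValueError(
            "model produced no logits for this input; the audio is likely "
            "empty or degenerate after feature extraction"
        )
    return logits[:, -1, :]
```

Catching this `ValueError` in the decoding loop lets the run continue past the single problematic file instead of aborting.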
### Expected behavior
No exception
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21057/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21057/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21056
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21056/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21056/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21056/events
|
https://github.com/huggingface/transformers/issues/21056
| 1,524,735,343
|
I_kwDOCUB6oc5a4Z1v
| 21,056
|
Trained TensorFlow (Keras) model makes an error in the transformers pipeline
|
{
"login": "danial1995",
"id": 84047422,
"node_id": "MDQ6VXNlcjg0MDQ3NDIy",
"avatar_url": "https://avatars.githubusercontent.com/u/84047422?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danial1995",
"html_url": "https://github.com/danial1995",
"followers_url": "https://api.github.com/users/danial1995/followers",
"following_url": "https://api.github.com/users/danial1995/following{/other_user}",
"gists_url": "https://api.github.com/users/danial1995/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danial1995/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danial1995/subscriptions",
"organizations_url": "https://api.github.com/users/danial1995/orgs",
"repos_url": "https://api.github.com/users/danial1995/repos",
"events_url": "https://api.github.com/users/danial1995/events{/privacy}",
"received_events_url": "https://api.github.com/users/danial1995/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You are using a Keras model here, but the `pipeline` can only deal with `TFPreTrainedModel`s (models of the Transformers library).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,676
| 1,676
|
NONE
| null |
<details><summary>Click to expand!</summary>
### Issue Type
Bug
### Have you reproduced the bug with TF nightly?
Yes
### Source
source
### Tensorflow Version
2.8
### Custom Code
Yes
### OS Platform and Distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/Compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current Behaviour?
I'm using this GitHub text summarization project and I have a problem. I have been struggling for two weeks and I could not figure it out.
I'm using a notebook from this GitHub repository:
https://github.com/flogothetis/Abstractive-Summarization-T5-Keras
notebook link:
https://github.com/flogothetis/Abstractive-Summarization-T5-Keras/blob/main/AbstractiveSummarizationT5.ipynb
After training the model I want to use the Hugging Face transformers pipeline to generate a summarization:
**from transformers import pipeline
summarizer = pipeline("summarization", model=model, tokenizer="t5-small", framework="tf")
summarizer("some text")**
but it pops out an error:
**AttributeError: 'Functional' object has no attribute 'config'**
Does anyone have an idea how I can solve it?
full error:
```
AttributeError                            Traceback (most recent call last)
/tmp/ipykernel_20/1872405895.py in <module>
----> 1 summarizer = pipeline("summarization", model=model, tokenizer="t5-small", framework="tf")
      2
      3 summarizer("The US has passed the peak on new coronavirus cases, President Donald Trump said and predicted that some states would reopen")

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, framework, revision, use_fast, use_auth_token, model_kwargs, **kwargs)
    432                 break
    433
--> 434     return task_class(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, task=task, **kwargs)

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/text2text_generation.py in __init__(self, *args, **kwargs)
     37
     38     def __init__(self, *args, **kwargs):
---> 39         super().__init__(*args, **kwargs)
     40
     41         self.check_model_type(

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/base.py in __init__(self, model, tokenizer, modelcard, framework, task, args_parser, device, binary_output)
    548
    549         # Update config with task specific parameters
--> 550         task_specific_params = self.model.config.task_specific_params
    551         if task_specific_params is not None and task in task_specific_params:
    552             self.model.config.update(task_specific_params.get(task))

AttributeError: 'Functional' object has no attribute 'config'
```
### Standalone code to reproduce the issue
```shell
summarizer = pipeline("summarization", model=model, tokenizer="t5-small", framework="tf")
summarizer("some text")

but it pops out an error:
AttributeError: 'Functional' object has no attribute 'config'
```
### Relevant log output
_No response_</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21056/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21055
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21055/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21055/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21055/events
|
https://github.com/huggingface/transformers/pull/21055
| 1,524,711,039
|
PR_kwDOCUB6oc5G7eWV
| 21,055
|
Add Spanish translation to community.mdx
|
{
"login": "shogohida",
"id": 10365357,
"node_id": "MDQ6VXNlcjEwMzY1MzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/10365357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shogohida",
"html_url": "https://github.com/shogohida",
"followers_url": "https://api.github.com/users/shogohida/followers",
"following_url": "https://api.github.com/users/shogohida/following{/other_user}",
"gists_url": "https://api.github.com/users/shogohida/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shogohida/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shogohida/subscriptions",
"organizations_url": "https://api.github.com/users/shogohida/orgs",
"repos_url": "https://api.github.com/users/shogohida/repos",
"events_url": "https://api.github.com/users/shogohida/events{/privacy}",
"received_events_url": "https://api.github.com/users/shogohida/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds Spanish translation to community.mdx
Fixes #15947
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@osanseviero @omarespejel @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21055/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21055",
"html_url": "https://github.com/huggingface/transformers/pull/21055",
"diff_url": "https://github.com/huggingface/transformers/pull/21055.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21055.patch",
"merged_at": 1673684705000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21054
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21054/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21054/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21054/events
|
https://github.com/huggingface/transformers/issues/21054
| 1,524,694,741
|
I_kwDOCUB6oc5a4P7V
| 21,054
|
X-CLIP and other video classification models can't be loaded into CUDA GPU for inference without crashing the kernel/process
|
{
"login": "e-caste",
"id": 48513706,
"node_id": "MDQ6VXNlcjQ4NTEzNzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/48513706?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-caste",
"html_url": "https://github.com/e-caste",
"followers_url": "https://api.github.com/users/e-caste/followers",
"following_url": "https://api.github.com/users/e-caste/following{/other_user}",
"gists_url": "https://api.github.com/users/e-caste/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-caste/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-caste/subscriptions",
"organizations_url": "https://api.github.com/users/e-caste/orgs",
"repos_url": "https://api.github.com/users/e-caste/repos",
"events_url": "https://api.github.com/users/e-caste/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-caste/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"After trying to revert the driver back from 525 to 515 and installing CUDA with other methods, such as by specifying `cuda_toolkit=11.7` in the `conda` installation (I originally just used `venv`), I've found out that my original configuration (under \"Other details\" above) worked if I did not import decord. \r\n\r\nThe issue is given by simply importing decord (`import decord` is sufficient) before trying to move the model to the GPU. Unfortunately, since decord is written in C at its core, the Python process simply segfaults without error. \r\nI'm now using pyAV and everything works as expected. I'm closing this issue and opening another one to ask for updated docs without decord, this has cost me a lot of debugging time for something simple but undocumented, so I hope to save that time to other users.",
"Hi @e-caste,\r\n\r\nThanks a lot for investigating. I'm not able to reproduce this in [Google Colab](https://colab.research.google.com/drive/1SMc0zW_zfp8j-iiasUh3CeJMY2i1NlPu?usp=sharing), which at the moment has PyTorch 1.13. I'm using the main branch of Transformers. The model seems correctly placed on the GPU (which I confirmed by running `nvidia-smi` and seeing whether the memory is occupied).\r\n\r\nI'll ping @nateraw here as he has been looking into several video decoding libraries, we should of course take one that works as intended. From [this thread](https://github.com/huggingface/datasets/issues/5225), we're currently in favor of using PyAV.",
"@NielsRogge I'm not sure if literally the `import decord` line is enough for the kernel to crash (I've tested a lot of things and I can't remember), but I'm sure that the import line in the docs (`from decord import VideoReader, cpu` -- I had it before `import torch` and `from transformers import XCLIPProcessor, XCLIPModel`) made it crash.\n\n"
] | 1,673
| 1,673
| 1,673
|
NONE
| null |
### System Info
Originally:
- `transformers` version: 4.25.1 (also tried 4.26.0-dev directly from the GitHub main branch)
- Platform: Linux-6.0.12-76060006-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
Then, given [this comment](https://github.com/microsoft/VideoX/issues/57#issuecomment-1283627674) in the X-CLIP issues, I also tried:
- `transformers` version: 4.25.1
- Platform: Linux-6.0.12-76060006-generic-x86_64-with-glibc2.35
- Python version: 3.8.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.8.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@NielsRogge tagging you since you've added the code for X-CLIP to the library and also commented in the X-CLIP issue I've mentioned above.
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. I have first copied [the example code](https://huggingface.co/docs/transformers/main/en/model_doc/xclip#transformers.XCLIPModel.forward.example) provided in the library documentation, which worked.
2. Then I've extended my notebook to process data from my (currently) private dataset, but still following exactly the example code. This is where I've noticed that the inference took a few seconds, so...
3. I have compiled [decord](https://github.com/dmlc/decord) from source, which allowed me to run the data processing on the GPU. This worked, but it didn't provide any performance improvement, so I reverted to the PyPI version.
4. I tried manually moving the model to the GPU, with `model.to("cuda")`, `model.to("cuda:0")`, `model.to(torch.device("cuda"))` and `model.cuda()`. All of these make the Jupyter Lab kernel crash with no error in the logs. If reloaded, the model still works, but only runs on CPU.
5. I also tried replacing `XClipModel` with other video classification models, such as [`TimesformerForVideoClassification`](https://huggingface.co/docs/transformers/main/en/model_doc/timesformer#transformers.TimesformerForVideoClassification). Since this model is not included in the stable release yet, I uninstalled transformers v4.25.1 and installed the current main branch (v4.26.0-dev). This still only ran on CPU and refused to work on GPU.
6. I have then found [this comment](https://github.com/microsoft/VideoX/issues/57#issuecomment-1283627674) about my exact problem in the microsoft/VideoX issues, saying they solved it by downgrading to PyTorch 1.8.0, which I did (from 1.13.0) after also downgrading Python (from 3.10 to 3.8 due to PyTorch compatibility). With this change, instantiating the model made the kernel crash immediately. My guess is that between PyTorch 1.8.0 and 1.13.0 a fallback to the CPU if the model couldn't be loaded into GPU was introduced.
Other details:
- Linux distro: Pop!_OS 22.04
- CPU: Ryzen 5 5600X
- GPU: NVIDIA RTX 3090
- RAM: 16GB (even though limited, the model which I'm trying to load (microsoft/xclip-base-patch16-zero-shot) should fit with no problem)
- NVIDIA driver 525.60.11
- CUDA 11.2 (installed with the `system76-cuda-latest` metapackage) -- even though `nvidia-smi` reports CUDA 12.0, could this be an issue?
### Expected behavior
The model should be loaded into the GPU automatically, like other models that currently work flawlessly for me such as BART. At least, manually moving the model to the GPU should work without segfaulting.
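As the resolution notes, the crash disappears once decord is never imported and frames are read with PyAV instead. The frame-index sampling helper used in the X-CLIP docs is decoder-agnostic, so it can be kept as-is; a minimal sketch (NumPy assumed, the frame reading itself is left to PyAV or any other reader):

```python
import numpy as np

def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    # Pick `clip_len` evenly spaced frame indices from a random window of
    # the video (`seg_len` = total number of frames). Mirrors the helper
    # in the X-CLIP docs, with no dependency on decord.
    converted_len = int(clip_len * frame_sample_rate)
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
    return indices
```

The returned indices can be passed to any reader (e.g. a PyAV container decode loop) to collect the frames for the processor.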
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21054/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21053
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21053/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21053/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21053/events
|
https://github.com/huggingface/transformers/issues/21053
| 1,524,689,075
|
I_kwDOCUB6oc5a4Oiz
| 21,053
|
Token embedding resizing does not work for TFGPT2Model
|
{
"login": "visionscaper",
"id": 1189068,
"node_id": "MDQ6VXNlcjExODkwNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1189068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/visionscaper",
"html_url": "https://github.com/visionscaper",
"followers_url": "https://api.github.com/users/visionscaper/followers",
"following_url": "https://api.github.com/users/visionscaper/following{/other_user}",
"gists_url": "https://api.github.com/users/visionscaper/gists{/gist_id}",
"starred_url": "https://api.github.com/users/visionscaper/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/visionscaper/subscriptions",
"organizations_url": "https://api.github.com/users/visionscaper/orgs",
"repos_url": "https://api.github.com/users/visionscaper/repos",
"events_url": "https://api.github.com/users/visionscaper/events{/privacy}",
"received_events_url": "https://api.github.com/users/visionscaper/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@visionscaper thank you for raising the issue! It is a generalized problem with this check, which should only rely on the config's vocab size (which is the only reliable source of the actual vocabulary size at any given moment).\r\n\r\n@susnato opened a fix for GPT2, but other models will also need a fix as well",
"(@susnato -- I've assigned this issue to me so it doesn't get forgotten, but I'm counting on your aid π )",
"Hi @gante \r\nI have been hit by the same issue! Namely, after having added new tokens to the tokenizer (GPT2Tokenizer), and resized the token_embeddings of the model (TFGPT2LMHeadModel), the model.fit(...) throw errors the same as @visionscaper reported.\r\nWhen could you release a fix patch? Or is there a workaround solution for now?\r\n\r\nYou guys are doing a great job! And your support is highly appreciated! Cheers~",
"Hello @gante ! Thanks for your support. I also has faced the same issue as is commented by @visionscaper and @tqye2000. Especially, I tried to check almost every TFGPT2 based pretrained models released by huggingface and figured it out that resize_token_embeddings() does not work for all of them, even including the example code written in huggingface document. Hope this error gets fixed as soon as possible ! :)\r\n\r\n\r\nEDIT)\r\nAfter reading the comment below:\r\n> @visionscaper thank you for raising the issue! It is a generalized problem with this check, which should only rely on the config's vocab size (which is the only reliable source of the actual vocabulary size at any given moment).\r\n> \r\n> @susnato opened a fix for GPT2, but other models will also need a fix as well\r\n\r\nI installed the source version of transformers library, which the most latestes on-going code handled by huggingface.co, rather than installing a stable distribution version. Then, resize_token_emeddings() successfully worked with TFGPT2 module ! Thanks to @gante @susnato for fixing crucial errors to Tensorflow users. :)\r\n\r\n",
"@tqye2000 @CHLEE-Leo Hey π \r\n\r\nYes, the current source version has the issue fixed for TFGPT2. A new release of `transformers` should happen late next week, which will include this fix. The issue is present in other models, but hopefully will be sorted out soon as well. \r\n\r\nFYI, this issue appeared because we noticed a dangerous pattern in our embedding layers -- in TF, we can request to embed integers outside the bounds of the embedding layer and the code won't crash (returns a vector of zeros), which is extremely dangerous. I've added an out-of-bounds check, but forgot to account for the case with resized vocabulary π ",
"Fixed on all models, thanks to @susnato π§‘ ",
"Thanks @gante!",
"> Hello @gante ! Thanks for your support. I also has faced the same issue as is commented by @visionscaper and @tqye2000. Especially, I tried to check almost every TFGPT2 based pretrained models released by huggingface and figured it out that resize_token_embeddings() does not work for all of them, even including the example code written in huggingface document. Hope this error gets fixed as soon as possible ! :)\r\n> \r\n> EDIT) After reading the comment below:\r\n> \r\n> > @visionscaper thank you for raising the issue! It is a generalized problem with this check, which should only rely on the config's vocab size (which is the only reliable source of the actual vocabulary size at any given moment).\r\n> > @susnato opened a fix for GPT2, but other models will also need a fix as well\r\n> \r\n> I installed the source version of transformers library, which the most latestes on-going code handled by huggingface.co, rather than installing a stable distribution version. Then, resize_token_emeddings() successfully worked with TFGPT2 module ! Thanks to @gante @susnato for fixing crucial errors to Tensorflow users. :)\r\n\r\nHi\r\nCould you please show me where or how could I get the latest source version of transformers? Can I get it with pip upgrade?\r\nMany thanks!",
"Hey @tqye2000 π You can upgrade your `transformers` installation to match the current source version with `pip install --upgrade git+https://github.com/huggingface/transformers.git`",
"Thank you very much, @gante! After having upgraded to the current source version, the resize_token_emeddings() seems to be working now. However I get \"Allocation of 740033280 exceeds 10% of free system memory\" messages. I guess this is my PC's issue.\r\n\r\n",
"Hi @gante \r\nMay I ask another question. For fine tuning the gpt-2 model, should I pass the labels exactly the same as the inputs or should I shift the inputs by one token to create the labels? I get mixed information on the internet, some said the labels should be a copy of inputs, some examples showed the labels should be one-token shifted of the inputs.\r\nI apologise if here is not the right place for asking such questions!\r\nMany thanks! ",
"Hey @tqye2000 -- using the best possible reference, [the code itself](https://github.com/huggingface/transformers/blob/31336dcf3f93dee19cd13c981f16982d612040d2/src/transformers/models/gpt2/modeling_gpt2.py#L1068), you can see that you *don't* need to shift the inputs. In other words, labels = inputs, all shifting happens inside the model. I hope this helps π€ ",
"Hi @gante \r\nThank you very much for replying! \r\nIndeed I eventually dived into the code to see what's going on there and found:\r\n` if labels is not None:\r\n # Shift so that tokens < n predict n\r\n shift_logits = lm_logits[..., :-1, :].contiguous()\r\n shift_labels = labels[..., 1:].contiguous()\r\n`\r\nBut nevertheless it is good to have your confirmation!\r\n",
"Hi @gante \r\nI think I still need to shift the labels by 1 token by myself. I guess this may be to do with the way I am passing the dataset to the transformer model.\r\n`dataset= tf.data.Dataset.from_tensor_slices((inputs, labels))\r\ndataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)\r\n\r\nhist = model.fit(dataset, epochs=4)\r\n`\r\nI just tested. If I didn't shift the labels myself, the fine tuning failed.\r\n\r\nPerhaps only if the labels is passed explicitly \"labels=labels\" to the model, then no need to shift beforehand. ",
"@tqye2000 that should not be needed -- with HF models, if the label is not provided, [we try to infer it](https://github.com/huggingface/transformers/blob/73a2ff69740123ef85343580cbfa9ee8ce5e6fd5/src/transformers/modeling_tf_utils.py#L1521) (which is the case for GPT2, where labels = inputs).\r\n\r\nI'd recommend seeing our example to fine-tune models like GPT2: https://github.com/huggingface/transformers/blob/main/examples/tensorflow/language-modeling/run_clm.py\r\n\r\n(and, if it still fails, to open a new issue with a snippet where we can reproduce the problem :) )"
] | 1,673
| 1,674
| 1,674
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-57-generic-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante and @Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
After `add_special_tokens` to tokenizer and `resize_token_embeddings` on `TFGPT2Model`, evaluating the model results in an error that indicates that the embeddings are not resized as expected.
Please see the example code and the execution output below:
```
from transformers import GPT2Tokenizer, TFGPT2Model
SPECIAL_TOKENS_MAPPING = {
'bos_token': '<bos>',
'eos_token': '<eos>',
'pad_token': '<pad>',
'additional_special_tokens': ['<speaker1>', '<speaker2>']
}
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2Model.from_pretrained("gpt2")
print("Evaluating TFGPT2Model BEFORE extending the tokenizer and model with additional tokens ...")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
print(f"inputs = \n{inputs}\n")
outputs = model(inputs)
print(f"DONE!")
print("Adding tokens...")
orig_num_tokens = len(tokenizer.get_vocab())
num_special_tokens = tokenizer.add_special_tokens(SPECIAL_TOKENS_MAPPING)
print(f"orig_num_tokens = {orig_num_tokens}, num_special_tokens={num_special_tokens}")
model.resize_token_embeddings(new_num_tokens=orig_num_tokens + num_special_tokens)
print("Evaluating TFGPT2Model AFTER extending the tokenizer and model with additional tokens ...")
inputs = tokenizer("<speaker1>Hello, my dog is cute<speaker2>I agree!", return_tensors="tf")
print(f"inputs = \n{inputs}\n")
outputs = model(inputs)
print(f"DONE!")
```
```
Evaluating TFGPT2Model BEFORE extending the tokenizer and model with additional tokens ...
inputs =
{'input_ids': <tf.Tensor: shape=(1, 6), dtype=int32, numpy=array([[15496, 11, 616, 3290, 318, 13779]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(1, 6), dtype=int32, numpy=array([[1, 1, 1, 1, 1, 1]], dtype=int32)>}
DONE!
Adding tokens...
orig_num_tokens = 50257, num_special_tokens=5
Evaluating TFGPT2Model AFTER extending the tokenizer and model with additional tokens ...
inputs =
{'input_ids': <tf.Tensor: shape=(1, 11), dtype=int32, numpy=
array([[50260, 15496, 11, 616, 3290, 318, 13779, 50261, 40,
4236, 0]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(1, 11), dtype=int32, numpy=array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32)>}
Traceback (most recent call last):
File "/home/freddy/workspace/Nuhame/mlpug/examples/chatbot/tensorflow/test_tf_resize_token_size.py", line 33, in <module>
outputs = model(inputs)
File "/home/freddy/.virtualenvs/mlpug-tf/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/freddy/.virtualenvs/mlpug-tf/lib/python3.9/site-packages/transformers/modeling_tf_utils.py", line 432, in run_call_with_unpacked_inputs
return func(self, **unpacked_inputs)
File "/home/freddy/.virtualenvs/mlpug-tf/lib/python3.9/site-packages/transformers/models/gpt2/modeling_tf_gpt2.py", line 773, in call
outputs = self.transformer(
File "/home/freddy/.virtualenvs/mlpug-tf/lib/python3.9/site-packages/transformers/modeling_tf_utils.py", line 432, in run_call_with_unpacked_inputs
return func(self, **unpacked_inputs)
File "/home/freddy/.virtualenvs/mlpug-tf/lib/python3.9/site-packages/transformers/models/gpt2/modeling_tf_gpt2.py", line 447, in call
tf.debugging.assert_less(
tensorflow.python.framework.errors_impl.InvalidArgumentError: Exception encountered when calling layer 'transformer' (type TFGPT2MainLayer).
input_ids must be smaller than the embedding layer's input dimension (got 50261 >= 50257)
Condition x < y did not hold.
First 3 elements of x:
[50260 15496 11]
First 1 elements of y:
[50257]
Call arguments received by layer 'transformer' (type TFGPT2MainLayer):
β’ input_ids=tf.Tensor(shape=(1, 11), dtype=int32)
β’ past_key_values=None
β’ attention_mask=tf.Tensor(shape=(1, 11), dtype=int32)
β’ token_type_ids=None
β’ position_ids=None
β’ head_mask=None
β’ inputs_embeds=None
β’ encoder_hidden_states=None
β’ encoder_attention_mask=None
β’ use_cache=True
β’ output_attentions=False
β’ output_hidden_states=False
β’ return_dict=True
β’ training=False
```
### Expected behavior
The model should have 50257 + 5 = 50262 embeddings after resizing and thus an input ID with value 50261 should not result in any errors. The above code should run without errors.
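The expected semantics can be illustrated with a minimal pure-Python stand-in (this is a toy sketch, not the transformers implementation): growing the embedding table from 50257 to 50262 rows means id 50261 stays in range.

```python
# Toy model of resize_token_embeddings semantics. The class and method names
# here are illustrative only; they do not exist in transformers.

class ToyEmbedding:
    def __init__(self, num_tokens, dim=4):
        # each row is a toy embedding vector for one token id
        self.table = [[0.0] * dim for _ in range(num_tokens)]

    def resize(self, new_num_tokens):
        dim = len(self.table[0])
        if new_num_tokens > len(self.table):
            # newly added rows are freshly initialized (zeros here for simplicity)
            self.table.extend([0.0] * dim for _ in range(new_num_tokens - len(self.table)))
        else:
            self.table = self.table[:new_num_tokens]

    def lookup(self, token_id):
        # mirrors the assert_less check in TFGPT2: ids must be < table size
        if token_id >= len(self.table):
            raise ValueError(f"input id {token_id} >= embedding size {len(self.table)}")
        return self.table[token_id]

emb = ToyEmbedding(50257)
emb.resize(50257 + 5)          # 50262 rows after resizing
vec = emb.lookup(50261)        # would raise before resizing, succeeds after
```

The bug report amounts to the real TF model still enforcing the pre-resize bound (50257) in this check.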
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21053/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21052
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21052/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21052/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21052/events
|
https://github.com/huggingface/transformers/issues/21052
| 1,524,607,085
|
I_kwDOCUB6oc5a36ht
| 21,052
|
Fine-tune GIT on custom dataset [Expected input batch_size to match target batch_size]
|
{
"login": "vasyza",
"id": 101528345,
"node_id": "U_kgDOBg0zGQ",
"avatar_url": "https://avatars.githubusercontent.com/u/101528345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasyza",
"html_url": "https://github.com/vasyza",
"followers_url": "https://api.github.com/users/vasyza/followers",
"following_url": "https://api.github.com/users/vasyza/following{/other_user}",
"gists_url": "https://api.github.com/users/vasyza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasyza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasyza/subscriptions",
"organizations_url": "https://api.github.com/users/vasyza/orgs",
"repos_url": "https://api.github.com/users/vasyza/repos",
"events_url": "https://api.github.com/users/vasyza/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasyza/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@vasyza I'm doing research in swimming area and have the same issue. How to fix that?",
"@sgugger and other developers please help",
"I am not too sure how you want us to help without providing a reproducible example of the error you get."
] | 1,673
| 1,673
| 1,673
|
NONE
| null |
deleted
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21052/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21051
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21051/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21051/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21051/events
|
https://github.com/huggingface/transformers/pull/21051
| 1,524,571,441
|
PR_kwDOCUB6oc5G7BP_
| 21,051
|
Add support for csv dataset files
|
{
"login": "ell-hol",
"id": 21223467,
"node_id": "MDQ6VXNlcjIxMjIzNDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/21223467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ell-hol",
"html_url": "https://github.com/ell-hol",
"followers_url": "https://api.github.com/users/ell-hol/followers",
"following_url": "https://api.github.com/users/ell-hol/following{/other_user}",
"gists_url": "https://api.github.com/users/ell-hol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ell-hol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ell-hol/subscriptions",
"organizations_url": "https://api.github.com/users/ell-hol/orgs",
"repos_url": "https://api.github.com/users/ell-hol/repos",
"events_url": "https://api.github.com/users/ell-hol/events{/privacy}",
"received_events_url": "https://api.github.com/users/ell-hol/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for your PR, but csv datasets cannot work with the expected data format (nested dictionaries with languages)."
] | 1,673
| 1,673
| 1,673
|
NONE
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21051/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21051",
"html_url": "https://github.com/huggingface/transformers/pull/21051",
"diff_url": "https://github.com/huggingface/transformers/pull/21051.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21051.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21050
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21050/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21050/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21050/events
|
https://github.com/huggingface/transformers/pull/21050
| 1,524,462,312
|
PR_kwDOCUB6oc5G6rVb
| 21,050
|
Patch-past-refactor
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,673
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
Should fix the test that broke `main`
cc @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21050/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21050",
"html_url": "https://github.com/huggingface/transformers/pull/21050",
"diff_url": "https://github.com/huggingface/transformers/pull/21050.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21050.patch",
"merged_at": 1673284334000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21049
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21049/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21049/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21049/events
|
https://github.com/huggingface/transformers/pull/21049
| 1,524,429,250
|
PR_kwDOCUB6oc5G6lZ9
| 21,049
|
Fix warning for MCTC model
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Merging to fix the warning @ydshieh but can address any comment in a later PR :-) ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21049). All of your documentation changes will be reflected on that endpoint."
] | 1,673
| 1,673
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
In #20861, the warning introduced did not use the right direction for the test. This PR fixes that.
Fixes #21031
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21049/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21049/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21049",
"html_url": "https://github.com/huggingface/transformers/pull/21049",
"diff_url": "https://github.com/huggingface/transformers/pull/21049.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21049.patch",
"merged_at": 1673171723000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21048
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21048/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21048/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21048/events
|
https://github.com/huggingface/transformers/pull/21048
| 1,524,395,951
|
PR_kwDOCUB6oc5G6fSy
| 21,048
|
fix typo
|
{
"login": "sabaul",
"id": 66197673,
"node_id": "MDQ6VXNlcjY2MTk3Njcz",
"avatar_url": "https://avatars.githubusercontent.com/u/66197673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sabaul",
"html_url": "https://github.com/sabaul",
"followers_url": "https://api.github.com/users/sabaul/followers",
"following_url": "https://api.github.com/users/sabaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sabaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sabaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sabaul/subscriptions",
"organizations_url": "https://api.github.com/users/sabaul/orgs",
"repos_url": "https://api.github.com/users/sabaul/repos",
"events_url": "https://api.github.com/users/sabaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sabaul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
Typo fix: Corrected the word metada --> metadata
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21048/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21048",
"html_url": "https://github.com/huggingface/transformers/pull/21048",
"diff_url": "https://github.com/huggingface/transformers/pull/21048.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21048.patch",
"merged_at": 1673168581000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21047
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21047/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21047/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21047/events
|
https://github.com/huggingface/transformers/pull/21047
| 1,524,251,729
|
PR_kwDOCUB6oc5G6Bu6
| 21,047
|
Remove Roberta Dependencies from XLM Roberta Flax and Tensorflow models
|
{
"login": "samuelzxu",
"id": 14795989,
"node_id": "MDQ6VXNlcjE0Nzk1OTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/14795989?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samuelzxu",
"html_url": "https://github.com/samuelzxu",
"followers_url": "https://api.github.com/users/samuelzxu/followers",
"following_url": "https://api.github.com/users/samuelzxu/following{/other_user}",
"gists_url": "https://api.github.com/users/samuelzxu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samuelzxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samuelzxu/subscriptions",
"organizations_url": "https://api.github.com/users/samuelzxu/orgs",
"repos_url": "https://api.github.com/users/samuelzxu/repos",
"events_url": "https://api.github.com/users/samuelzxu/events{/privacy}",
"received_events_url": "https://api.github.com/users/samuelzxu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks! There are a couple of places where the copy does not match the original. You can test locally with `make repo-consistency` to get the failing tests. Let me know if you need help!",
"HI @sgugger, thanks for the advice! I've tried `make repo-consistency` but it looks like the version of jax that was installed during `pip -e \".[dev]\"` causes `RuntimeError: jaxlib is version 0.1.75, but this version of jax requires version >= 0.3.0.`",
"Ah, it's a problem with my original dev install. I'll reinstall and see how it goes.",
"I can't figure out this last copy error, could you help me out? Thanks.",
"I won't be able to dive more into it until next week. Running `make fix-copies` and looking at the diff will give you a clue of what the copies util wants to change.",
"Got it, thanks for the tip",
"@sgugger I'm stuck on this error message as a part of `make repo-consistency`: \r\n```\r\npython utils/check_inits.py\r\nTraceback (most recent call last):\r\n File \"/home/ziggy/dev/transformers/utils/check_inits.py\", line 299, in <module>\r\n check_all_inits()\r\n File \"/home/ziggy/dev/transformers/utils/check_inits.py\", line 238, in check_all_inits\r\n raise ValueError(\"\\n\\n\".join(failures))\r\nValueError: Problem in src/transformers/__init__.py, both halves do not define the same objects.\r\nDifferences for tf backend:\r\n TFXLMRobertaForCausalLM in _import_structure but not in TYPE_HINT.\r\n TFXLMRobertaPreTrainedModel in _import_structure but not in TYPE_HINT.\r\nDifferences for flax backend:\r\n FlaxXLMRobertaForCausalLM in _import_structure but not in TYPE_HINT.\r\n FlaxXLMRobertaPreTrainedModel in _import_structure but not in TYPE_HINT.\r\n\r\nProblem in src/transformers/models/xlm_roberta/__init__.py, both halves do not define the same objects.\r\nDifferences for flax backend:\r\n FLAX_XLM_ROBERTA_PRETRAINED_MODEL_ARCHIVE_LIST in TYPE_HINT but not in _import_structure.\r\nmake: *** [Makefile:41: repo-consistency] Error 1\r\n```\r\n\r\nI don't know where the variable `TYPE_HINT` is, it doesn't seem to be anywhere in the entire repo apart from this error message.",
"Ah never mind, I found them. Thanks!",
"I'm confused by the error that's showing now - the doc builder can't find `TFXLMRobertaForCausalLM.forward` , which I don't think exists because it's tensorflow...",
"Finally, no errors π ",
"Thanks so much for all the help!"
] | 1,673
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
This removes Roberta dependencies from XLM Roberta Flax and Tensorflow Models.
I'm a bit confused about whether the `name` parameter to `TFXLMRobertaMainLayer` should be `xlm-roberta` or `xlm_roberta` - I've gone with `xlm-roberta` for now.
i.e. `self.XLMRoberta = TFXLMRobertaMainLayer(config, name="xlm-roberta")`
Fixes #19303
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21047/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21047",
"html_url": "https://github.com/huggingface/transformers/pull/21047",
"diff_url": "https://github.com/huggingface/transformers/pull/21047.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21047.patch",
"merged_at": 1674046180000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21046
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21046/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21046/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21046/events
|
https://github.com/huggingface/transformers/issues/21046
| 1,524,109,168
|
I_kwDOCUB6oc5a2A9w
| 21,046
|
Omatkasvot
|
{
"login": "Kuvajomppe",
"id": 98487575,
"node_id": "U_kgDOBd7NFw",
"avatar_url": "https://avatars.githubusercontent.com/u/98487575?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kuvajomppe",
"html_url": "https://github.com/Kuvajomppe",
"followers_url": "https://api.github.com/users/Kuvajomppe/followers",
"following_url": "https://api.github.com/users/Kuvajomppe/following{/other_user}",
"gists_url": "https://api.github.com/users/Kuvajomppe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kuvajomppe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kuvajomppe/subscriptions",
"organizations_url": "https://api.github.com/users/Kuvajomppe/orgs",
"repos_url": "https://api.github.com/users/Kuvajomppe/repos",
"events_url": "https://api.github.com/users/Kuvajomppe/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kuvajomppe/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[] | 1,673
| 1,673
| 1,673
|
NONE
| null |
### Model description
training on one's own faces
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21046/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21045
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21045/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21045/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21045/events
|
https://github.com/huggingface/transformers/issues/21045
| 1,523,972,632
|
I_kwDOCUB6oc5a1foY
| 21,045
|
VisualBertTokenizer
|
{
"login": "mszsorondo",
"id": 52178350,
"node_id": "MDQ6VXNlcjUyMTc4MzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/52178350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mszsorondo",
"html_url": "https://github.com/mszsorondo",
"followers_url": "https://api.github.com/users/mszsorondo/followers",
"following_url": "https://api.github.com/users/mszsorondo/following{/other_user}",
"gists_url": "https://api.github.com/users/mszsorondo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mszsorondo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mszsorondo/subscriptions",
"organizations_url": "https://api.github.com/users/mszsorondo/orgs",
"repos_url": "https://api.github.com/users/mszsorondo/repos",
"events_url": "https://api.github.com/users/mszsorondo/events{/privacy}",
"received_events_url": "https://api.github.com/users/mszsorondo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,676
| 1,676
|
NONE
| null |
### Feature request
VisualBert takes 2 main inputs: tokenized text and tokenized images.
The text tokenization can already be handled by the BertTokenizer, but the visual tokenization still has no support, and it is not a trivial task. These visual tokens are built from embeddings derived from a set of regions, each corresponding to an object detected in the image by an object detector.
Here's a more detailed description of those embeddings from the [paper](https://arxiv.org/pdf/1908.03557.pdf):
Each embedding in F is computed by summing three embeddings:
- f_o, a visual feature representation of the bounding region of f, computed by a convolutional neural network.
- f_s, a segment embedding indicating that it is an image embedding as opposed to a text embedding.
- f_p, a position embedding, used when alignments between words and bounding regions are provided as part of the input, and set to the sum of the position embeddings corresponding to the aligned words.
As a tip, remember that some VisualBert checkpoints handle different visual embedding dimensions. You can use the [examples from the model docs](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/visual_bert.mdx) as a guide.
Also note that, given that the embedding depends on an object detector, the detector should be an explicit parameter of the visual tokenizer, since different detectors will perform differently.
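The summation described in the paper can be sketched in a few lines. This is a hedged illustration only: the function name and the integer-valued toy embeddings are made up for clarity and are not part of any VisualBERT API.

```python
# Each visual token f is the element-wise sum of three embeddings of the same
# dimension: f_o (region feature from a CNN), f_s (segment embedding marking
# "image" vs "text"), and f_p (position embedding for word-aligned regions).

def visual_token(f_o, f_s, f_p):
    assert len(f_o) == len(f_s) == len(f_p), "embeddings must share one dimension"
    return [o + s + p for o, s, p in zip(f_o, f_s, f_p)]

# one detected region with a 4-dim feature, a constant "image" segment
# embedding, and a position embedding aligned to a word
f = visual_token([1, 2, 3, 4], [10] * 4, [100, 200, 300, 400])
```

A real tokenizer would produce one such vector per detected region, stacked into the visual-embeds tensor the model consumes.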
### Motivation
Building a visual embedding is conceptually simple, but implementing it is a tedious task, and there is no standard way to handle this directly with Transformers.
### Your contribution
This issue arose while building the ``` DummyVisualBertInputGenerator ``` as a requisite for exporting the model to ONNX in Optimum. This is still in progress.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21045/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21044
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21044/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21044/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21044/events
|
https://github.com/huggingface/transformers/pull/21044
| 1,523,665,848
|
PR_kwDOCUB6oc5G4Gqw
| 21,044
|
Add `min_new_tokens` argument in generate() (implementation based on `MinNewTokensLengthLogitsProcessor`)
|
{
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"(@sgugger ready to merge if you agree. For context: this PR makes the `MinNewTokensLengthLogitsProcessor` usable from `.generate`, if the user passes `min_new_tokens` in the generate config or as an argument)"
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #20756 #20814 #20614 (cc @gonced8 @kotikkonstantin)
As many said, it is better to add an argument `min_new_tokens` to the `.generate()` method to limit the length of newly generated tokens. The current parameter `min_length` limits the length of `prompt + newly generated tokens`, not the length of `newly generated tokens`.
I closed my old PR #20819 and implement this feature based on `MinNewTokensLengthLogitsProcessor` (see #20892) as suggested by @gante .
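The effect of such a constraint can be sketched independently of transformers (illustrative only; the function name and arguments here are hypothetical, not the `MinNewTokensLengthLogitsProcessor` API): while fewer than `min_new_tokens` tokens have been generated beyond the prompt, the EOS logit is forced to negative infinity so the model cannot stop early.

```python
import math

def mask_eos(logits, prompt_len, cur_len, min_new_tokens, eos_id):
    # tokens generated so far, prompt excluded -- this is the key difference
    # from min_length, which counts prompt + generated tokens together
    new_tokens = cur_len - prompt_len
    if new_tokens < min_new_tokens:
        logits = list(logits)
        logits[eos_id] = -math.inf   # EOS can never be sampled yet
    return logits

logits = [0.5, 1.0, 2.0]             # toy vocab of 3, eos_id = 2
early = mask_eos(logits, prompt_len=4, cur_len=5, min_new_tokens=3, eos_id=2)
late = mask_eos(logits, prompt_len=4, cur_len=8, min_new_tokens=3, eos_id=2)
```

At step 5 only one new token exists, so EOS is masked; at step 8 four new tokens exist and the logits pass through unchanged.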
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- @gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21044/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21044/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21044",
"html_url": "https://github.com/huggingface/transformers/pull/21044",
"diff_url": "https://github.com/huggingface/transformers/pull/21044.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21044.patch",
"merged_at": 1673877729000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21043
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21043/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21043/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21043/events
|
https://github.com/huggingface/transformers/issues/21043
| 1,523,528,430
|
I_kwDOCUB6oc5azzLu
| 21,043
|
ConvNeXT V2
|
{
"login": "IMvision12",
"id": 88665786,
"node_id": "MDQ6VXNlcjg4NjY1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IMvision12",
"html_url": "https://github.com/IMvision12",
"followers_url": "https://api.github.com/users/IMvision12/followers",
"following_url": "https://api.github.com/users/IMvision12/following{/other_user}",
"gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions",
"organizations_url": "https://api.github.com/users/IMvision12/orgs",
"repos_url": "https://api.github.com/users/IMvision12/repos",
"events_url": "https://api.github.com/users/IMvision12/events{/privacy}",
"received_events_url": "https://api.github.com/users/IMvision12/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Hi @IMvision12! Thanks for taking this on, I think ConvNeXT V2 would be a great addition to transformers.\r\n\r\nIf you have any questions about the internal logic of the library or run into issues, you can ping me or @NielsRogge anytime. We can also create a Slack channel and continue the collaboration on the PR over there if you'd like. ",
"@IMvision12 I sent the invite, looking forward to adding ConvNeXT V2 to transformers!",
"Hello, I'd like to work on this issue. How do I get started?",
"Hi @asrimanth, I took over the PR and I'm almost done with it. Feel free to look at other open issues though!"
] | 1,673
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
### Model description
Short Description
Just released - ConvNeXt with a new internal layer.
In this paper, the authors propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition.
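For reference, the GRN layer described above can be sketched roughly as follows. This is an illustrative re-implementation based on the paper's description (global feature aggregation, divisive normalization across channels, then calibration with a residual), assuming channels-last input; the class name and `gamma`/`beta` parameters are this sketch's own choices, not the official code:

```python
import torch
import torch.nn as nn

class GRN(nn.Module):
    """Global Response Normalization (ConvNeXt V2 style), channels-last input (N, H, W, C)."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        # gamma/beta start at zero, so the layer is an identity at initialization
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # global feature aggregation: L2 norm over the spatial dimensions
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)       # (N, 1, 1, C)
        # divisive normalization across channels
        nx = gx / (gx.mean(dim=-1, keepdim=True) + self.eps)
        # feature calibration plus residual connection
        return self.gamma * (x * nx) + self.beta + x
```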
# Contribution
## I would like to work on this!
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Papers
https://arxiv.org/abs/2301.00808
Official Implementations
https://github.com/facebookresearch/ConvNeXt-V2
@NielsRogge @alaradirik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21043/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21042
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21042/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21042/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21042/events
|
https://github.com/huggingface/transformers/pull/21042
| 1,523,511,154
|
PR_kwDOCUB6oc5G3jdm
| 21,042
|
fix typo
|
{
"login": "kaisugi",
"id": 36184621,
"node_id": "MDQ6VXNlcjM2MTg0NjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/36184621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaisugi",
"html_url": "https://github.com/kaisugi",
"followers_url": "https://api.github.com/users/kaisugi/followers",
"following_url": "https://api.github.com/users/kaisugi/following{/other_user}",
"gists_url": "https://api.github.com/users/kaisugi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaisugi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaisugi/subscriptions",
"organizations_url": "https://api.github.com/users/kaisugi/orgs",
"repos_url": "https://api.github.com/users/kaisugi/repos",
"events_url": "https://api.github.com/users/kaisugi/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaisugi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
typo fix (dictionnary -> dictionary)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21042/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21042",
"html_url": "https://github.com/huggingface/transformers/pull/21042",
"diff_url": "https://github.com/huggingface/transformers/pull/21042.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21042.patch",
"merged_at": 1673082806000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21041
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21041/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21041/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21041/events
|
https://github.com/huggingface/transformers/issues/21041
| 1,523,492,650
|
I_kwDOCUB6oc5azqcq
| 21,041
|
Add Tri-Stage Scheduler, proposed in SpecAugment
|
{
"login": "jp1924",
"id": 93233241,
"node_id": "U_kgDOBY6gWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/93233241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jp1924",
"html_url": "https://github.com/jp1924",
"followers_url": "https://api.github.com/users/jp1924/followers",
"following_url": "https://api.github.com/users/jp1924/following{/other_user}",
"gists_url": "https://api.github.com/users/jp1924/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jp1924/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jp1924/subscriptions",
"organizations_url": "https://api.github.com/users/jp1924/orgs",
"repos_url": "https://api.github.com/users/jp1924/repos",
"events_url": "https://api.github.com/users/jp1924/events{/privacy}",
"received_events_url": "https://api.github.com/users/jp1924/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Note that we won't accept new optimizer/scheduler in the Transformers library as the main goal of Transformers is models :-)\r\nYou can add the scheduler directly to an example however!",
"ok! thank you for your reply!"
] | 1,673
| 1,673
| 1,673
|
NONE
| null |
### Feature request
paper: [SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779)
code: [Fairseq Tri-Stage Scheduler](https://github.com/facebookresearch/fairseq/blob/main/fairseq/optim/lr_scheduler/tri_stage_lr_scheduler.py)
I want to add a Tri-Stage Scheduler to huggingface.
### Motivation
I have two motivations:
- first, many ASR models use a tri-stage scheduler during training, typically wav2vec2
- second, I am already using a tri-stage scheduler for a model I'm building, so I thought it would be good to contribute it along the way.
### Your contribution
It would probably require modifying the optimization.py, trainer.py, and training_args.py code.
```python
import math
from typing import Tuple, Union

from torch.optim import Optimizer
from torch.optim.lr_scheduler import LambdaLR


# [NOTE]: copied from https://github.com/facebookresearch/fairseq/blob/main/fairseq/optim/lr_scheduler/tri_stage_lr_scheduler.py
def get_tri_stage_scheduler_with_warmup(
    optimizer: Optimizer,
    num_training_steps: int,
    final_lr: float,
    num_warmup_steps: Union[int, float],
    num_hold_steps: Union[int, float],
    num_decay_steps: Union[int, float],
    last_epoch: int = -1,
) -> LambdaLR:
    default_lr = optimizer.defaults["lr"]

    check_warmup_type = isinstance(num_warmup_steps, int)
    warmup_steps = num_warmup_steps if check_warmup_type else num_warmup_steps * num_training_steps
    check_hold_type = isinstance(num_hold_steps, int)
    hold_steps = num_hold_steps if check_hold_type else num_hold_steps * num_training_steps
    check_decay_type = isinstance(num_decay_steps, int)
    decay_steps = num_decay_steps if check_decay_type else num_decay_steps * num_training_steps

    if not (warmup_steps + hold_steps + decay_steps) <= num_training_steps:
        raise ValueError("warmup + hold + decay steps must not exceed num_training_steps; please adjust the schedule")

    warmup_factor = default_lr / warmup_steps
    decay_factor = -math.log(final_lr) / decay_steps

    def _decide_stage(step: int) -> Tuple[str, int]:
        # [NOTE]: warmup (ramp-up) stage
        if step < warmup_steps:
            return ("warm", step)
        offset = warmup_steps

        # [NOTE]: hold stage
        if step < offset + hold_steps:
            return ("hold", step - offset)
        offset += hold_steps

        # [NOTE]: decay stage
        if step <= offset + decay_steps:
            return ("decay", step - offset)

        # [NOTE]: over stage
        return ("over", step - offset)

    def lr_lambda(current_step: int) -> float:
        stage, step = _decide_stage(current_step)
        if "warm" == stage:
            compensator = (current_step if current_step else 1) * default_lr
            learning_rate = (warmup_factor * step) + compensator
        elif "hold" == stage:
            compensator = default_lr
            learning_rate = default_lr**compensator
        elif "decay" == stage:
            compensator = default_lr
            learning_rate = (default_lr**compensator) * math.exp(-decay_factor * step)
        elif "over" == stage:
            learning_rate = final_lr
        return learning_rate

    return LambdaLR(optimizer, lr_lambda, last_epoch)
```
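As a quick sanity check of the three stages, the same idea can be sketched self-contained with `torch.optim.lr_scheduler.LambdaLR` and a simplified (compensator-free) tri-stage lambda — the stage lengths and `final_ratio` below are illustrative values I picked, not part of the proposal:

```python
import math
import torch

# Illustrative stage lengths and final-lr ratio (not from the proposal itself).
warmup, hold, decay, final_ratio = 10, 20, 30, 0.05
decay_factor = -math.log(final_ratio) / decay

def lr_lambda(step: int) -> float:
    if step < warmup:                          # linear ramp-up
        return step / warmup
    if step < warmup + hold:                   # hold at the peak lr
        return 1.0
    if step < warmup + hold + decay:           # exponential decay toward final_ratio
        return math.exp(-decay_factor * (step - warmup - hold))
    return final_ratio                         # flat once decay is done

opt = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=1e-3)
sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)

lrs = []
for _ in range(70):
    opt.step()
    sched.step()
    lrs.append(sched.get_last_lr()[0])
```

Plotting `lrs` reproduces the ramp-hold-decay shape shown in the figures below.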
## testing & result
Using matplotlib, I compared the fairseq tri-stage scheduler linked above with the tri-stage implementation I made, and also compared it with a linear scheduler's warmup.
### my tri-stage

### fairseq tri-stage

### linear scheduler

### all gather

It works well!
If you zoom in on the hold stage at 0.00025, you can also see the green curve!
(You can stop reading here; the rest is optional detail.)
## why use a compensator?
This tri-stage actually has a float mismatch: if you compare the output lr of this tri-stage with fairseq's tri-stage, the values differ when you test them.
This is because of LambdaLR: LambdaLR multiplies the value returned by the scheduler's lr_lambda by default_lr, so the output lr differs from fairseq's tri-stage. I used a compensator to solve this issue.
Since it's corrected mathematically, there can be small errors:
- average error of 9.361272723949663e-08 at the warmup stage.
- average error of -4.569185665208767e-08 at the hold stage.
- average error of -4.164950459092579e-08 at the decay stage.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21041/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21040
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21040/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21040/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21040/events
|
https://github.com/huggingface/transformers/issues/21040
| 1,523,130,725
|
I_kwDOCUB6oc5aySFl
| 21,040
|
pytorch-pretrained-bert and transformers give different results
|
{
"login": "alicialitrtwe",
"id": 54757395,
"node_id": "MDQ6VXNlcjU0NzU3Mzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/54757395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alicialitrtwe",
"html_url": "https://github.com/alicialitrtwe",
"followers_url": "https://api.github.com/users/alicialitrtwe/followers",
"following_url": "https://api.github.com/users/alicialitrtwe/following{/other_user}",
"gists_url": "https://api.github.com/users/alicialitrtwe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alicialitrtwe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alicialitrtwe/subscriptions",
"organizations_url": "https://api.github.com/users/alicialitrtwe/orgs",
"repos_url": "https://api.github.com/users/alicialitrtwe/repos",
"events_url": "https://api.github.com/users/alicialitrtwe/events{/privacy}",
"received_events_url": "https://api.github.com/users/alicialitrtwe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, @alicialitrtwe \r\n\r\nIn Huggingface the model is loaded from [here](https://huggingface.co/bert-large-cased)\r\nwhich as the description says has 336M paramaters\r\n\r\nThis model has the following configuration:(taken from [here](https://huggingface.co/bert-large-cased))\r\n * 24-layer\r\n * 1024 hidden dimension\r\n * 16 attention heads\r\n * 336M parameters.\r\n\r\nBut in https://github.com/Meelfy/pytorch_pretrained_BERT, the model has 340M parameters as the description says [here](https://github.com/Meelfy/pytorch_pretrained_BERT#doc)\r\n* bert-large-cased: 24-layer, 1024-hidden, 16-heads, 340M parameters\r\n\r\n\r\nSo, I believe you are getting different results depending on different implementations.\r\nActually in the `bert-large-cased` model card in huggingface there is a disclaimer suggesting this same problem, it says, \"Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team.\" . You can read more about it [here](https://huggingface.co/bert-large-cased).\r\n\r\nI hope it solves your question, \r\n\r\nThanks,\r\nsusnato.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,676
| 1,676
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.25.1
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.9.5
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

torch.manual_seed(0)
device = torch.device("cuda")  # `device` was not defined in the original snippet

tokenizer = BertTokenizer.from_pretrained('bert-large-cased', do_lower_case=False)
model = BertModel.from_pretrained('bert-large-cased')
model.eval()
model.to(device)

text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = tokenizer.tokenize(text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [1 for x in tokenized_text]
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
tokens_tensor = tokens_tensor.to('cuda')
segments_tensors = segments_tensors.to('cuda')

with torch.no_grad():
    outputs = model(tokens_tensor, segments_tensors)

#%%
import torch
from transformers import BertTokenizer, BertModel, BertForMaskedLM

torch.manual_seed(0)
device = torch.device("cuda")

tokenizer = BertTokenizer.from_pretrained('bert-large-cased', do_lower_case=False)
model = BertModel.from_pretrained('bert-large-cased', output_hidden_states=True)
model.eval()
model.to(device)

text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = tokenizer.tokenize(text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [1 for x in tokenized_text]
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
tokens_tensor = tokens_tensor.to('cuda')
segments_tensors = segments_tensors.to('cuda')

with torch.no_grad():
    outputs = model(tokens_tensor, segments_tensors)
```
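To make "different results" concrete, a small helper like the following (illustrative, not part of the original report) can compute the maximum absolute deviation between the two libraries' per-layer hidden states; the toy tensors below are stand-ins for the real encoder outputs:

```python
import torch

def max_layer_deviation(hidden_a, hidden_b):
    """Per-layer max absolute difference between two sequences of hidden-state tensors."""
    return [float((a - b).abs().max()) for a, b in zip(hidden_a, hidden_b)]

# toy demonstration with stand-in tensors shaped like (batch, seq_len, hidden)
layers_a = [torch.zeros(1, 4, 8), torch.ones(1, 4, 8)]
layers_b = [torch.zeros(1, 4, 8), torch.ones(1, 4, 8) * 1.5]
devs = max_layer_deviation(layers_a, layers_b)
```

Running the helper on the 24 layer outputs from both snippets would show exactly where the implementations diverge.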
### Expected behavior
```shell
The outputs from the 24 encoding layers should be identical for transformers and pytorch_pretrained_bert
```
### Checklist
- [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [X] I checked if a related official extension example runs on my machine.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21040/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21039
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21039/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21039/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21039/events
|
https://github.com/huggingface/transformers/issues/21039
| 1,523,121,282
|
I_kwDOCUB6oc5ayPyC
| 21,039
|
low_cpu_mem_usage raises KeyError with modified GPT2 model
|
{
"login": "Wenhan-Tan",
"id": 15255883,
"node_id": "MDQ6VXNlcjE1MjU1ODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/15255883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wenhan-Tan",
"html_url": "https://github.com/Wenhan-Tan",
"followers_url": "https://api.github.com/users/Wenhan-Tan/followers",
"following_url": "https://api.github.com/users/Wenhan-Tan/following{/other_user}",
"gists_url": "https://api.github.com/users/Wenhan-Tan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wenhan-Tan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wenhan-Tan/subscriptions",
"organizations_url": "https://api.github.com/users/Wenhan-Tan/orgs",
"repos_url": "https://api.github.com/users/Wenhan-Tan/repos",
"events_url": "https://api.github.com/users/Wenhan-Tan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wenhan-Tan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, @Wenhan-Tan \r\nI have made a PR regarding this issue, you can checkout the branch `fix_low_cpu_mem_usage` from my repository ([here](https://github.com/susnato/transformers/tree/fix_low_cpu_mem_usage)) and check if it solves your issue or not until the mods take any action on my PR or maybe merge it.\r\n\r\nThanks, \r\nsusnato.",
"Hi @susnato ,\r\nThank you! Your PR solves the issue! But I get another one when I use DeepSpeed inference afterwards. Not sure if they're related. Code is below:\r\n```\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoConfig\r\nimport deepspeed\r\n\r\nif __name__ == \"__main__\":\r\n model_id = \"gpt2\"\r\n model_config = AutoConfig.from_pretrained(pretrained_model_name_or_path=model_id)\r\n\r\n model_config.n_layer = 48\r\n model_config.n_head = 25\r\n model_config.n_embd = 1600\r\n model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=model_id,\r\n config=model_config,\r\n ignore_mismatched_sizes=True,\r\n torch_dtype=torch.float16,\r\n low_cpu_mem_usage=True)\r\n ds_config = {\r\n \"tensor_parallel\": {\"tp_size\": 1},\r\n \"dtype\": \"fp16\",\r\n \"replace_with_kernel_inject\": True,\r\n \"replace_method\": \"auto\",\r\n }\r\n ds_model = deepspeed.init_inference(model=model, config=ds_config)\r\n```\r\nI get errors below:\r\n```\r\nTraceback (most recent call last):\r\n File \"tmp.py\", line 23, in <module>\r\n ds_model = deepspeed.init_inference(model=model, config=ds_config)\r\n File \"/home/wenhant/.local/lib/python3.8/site-packages/deepspeed/__init__.py\", line 311, in init_inference\r\n engine = InferenceEngine(model, config=ds_inference_config)\r\n File \"/home/wenhant/.local/lib/python3.8/site-packages/deepspeed/inference/engine.py\", line 127, in __init__\r\n self.module.to(device)\r\n File \"/home/wenhant/.local/lib/python3.8/site-packages/transformers/modeling_utils.py\", line 1682, in to\r\n return super().to(*args, **kwargs)\r\n File \"/home/wenhant/.local/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 987, in to\r\n return self._apply(convert)\r\n File \"/home/wenhant/.local/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 639, in _apply\r\n module._apply(fn)\r\n File \"/home/wenhant/.local/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 639, in _apply\r\n 
module._apply(fn)\r\n File \"/home/wenhant/.local/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 662, in _apply\r\n param_applied = fn(param)\r\n File \"/home/wenhant/.local/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 985, in convert\r\n return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)\r\nNotImplementedError: Cannot copy out of meta tensor; no data!\r\n```\r\nThis error won't occur if I don't use the flag `low_cpu_mem_usage=True`."
] | 1,673
| 1,673
| 1,673
|
NONE
| null |
### System Info
```
- `transformers` version: 4.25.1
- Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.10
- Python version: 3.8.13
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Not yet
- Using distributed or parallel set-up in script?: Not yet
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm trying to test GPT2 models with different layer numbers, head numbers, and head sizes. The following code runs with no errors, and the model is loaded successfully into the CPU with random weights, which is expected.
```python
import torch
from transformers import AutoModelForCausalLM, AutoConfig

if __name__ == "__main__":
    model_id = "gpt2"
    model_config = AutoConfig.from_pretrained(pretrained_model_name_or_path=model_id)

    model_config.n_layer = 48
    model_config.n_head = 25
    model_config.n_embd = 1600
    model = AutoModelForCausalLM.from_pretrained(
        pretrained_model_name_or_path=model_id,
        config=model_config,
        ignore_mismatched_sizes=True,
        torch_dtype=torch.float16,
    )
```
However, when I set the flag `low_cpu_mem_usage=True` in `from_pretrained()` like this:
```python
import torch
from transformers import AutoModelForCausalLM, AutoConfig

if __name__ == "__main__":
    model_id = "gpt2"
    model_config = AutoConfig.from_pretrained(pretrained_model_name_or_path=model_id)

    model_config.n_layer = 48
    model_config.n_head = 25
    model_config.n_embd = 1600
    model = AutoModelForCausalLM.from_pretrained(
        pretrained_model_name_or_path=model_id,
        config=model_config,
        ignore_mismatched_sizes=True,
        torch_dtype=torch.float16,
        low_cpu_mem_usage=True,
    )
```
I get below errors:
```
/opt/conda/lib/python3.8/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.23.5)
warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of "
Traceback (most recent call last):
File "tmp.py", line 11, in <module>
model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=model_id,
File "/home/wenhant/.local/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 463, in from_pretrained
return model_class.from_pretrained(
File "/home/wenhant/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2379, in from_pretrained
) = cls._load_pretrained_model(
File "/home/wenhant/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2512, in _load_pretrained_model
param = model_state_dict[key]
KeyError: 'h.45.attn.c_proj.bias'
```
### Expected behavior
I expect my code to run without errors regardless of whether I set `low_cpu_mem_usage` to `True` or `False`.
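One possible workaround (an untested sketch, not from the issue thread): since the resized weights end up randomly initialized anyway when the checkpoint shapes no longer match, the model can be built from a config alone via the model class constructor, which never runs the checkpoint key-matching code path that raises the `KeyError`. Sizes are shrunk here so the example stays light; the issue uses n_layer=48, n_head=25, n_embd=1600:

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Build the config locally (no hub download needed for this sketch).
config = GPT2Config(n_layer=2, n_head=4, n_embd=64)

# Constructing from the config skips checkpoint loading entirely,
# so no state-dict key matching happens.
model = GPT2LMHeadModel(config)
model = model.to(torch.float16)
```

The trade-off is that no pretrained weights are loaded at all, which matches the situation here where `ignore_mismatched_sizes=True` already discards the mismatched ones.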
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21039/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21038
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21038/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21038/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21038/events
|
https://github.com/huggingface/transformers/pull/21038
| 1,523,083,471
|
PR_kwDOCUB6oc5G2LDy
| 21,038
|
Add: tensorflow example for image classification task guide
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you all for the reviews! \r\nI have added the final part - data augmentation (kudos to @sayakpaul for helping me troubleshoot the issues I was having). The example is now complete. Let me know if it looks good enough to be merged :) \r\ncc @amyeroberts @sgugger "
] | 1,673
| 1,674
| 1,673
|
CONTRIBUTOR
| null |
This PR addresses https://github.com/huggingface/transformers/issues/21037
It adds a Tensorflow example to the existing task guide on image classification.
State of the PR:
The example illustrates preprocessing in TF, training, and pushing to Hub. The code samples have been tested and they work/reproduce.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21038/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21038",
"html_url": "https://github.com/huggingface/transformers/pull/21038",
"diff_url": "https://github.com/huggingface/transformers/pull/21038.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21038.patch",
"merged_at": 1673976009000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21037
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21037/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21037/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21037/events
|
https://github.com/huggingface/transformers/issues/21037
| 1,523,073,934
|
I_kwDOCUB6oc5ayEOO
| 21,037
|
Add tensorflow example for the image classification task guide
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
] |
[] | 1,673
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
For the same dataset and steps as in https://huggingface.co/docs/transformers/tasks/image_classification,
add sample code for the TensorFlow part.
This example can supplement the existing guide and can be helpful to those who choose TensorFlow over PyTorch and would like to use Transformers for image classification.
Related PR: https://github.com/huggingface/transformers/pull/21038
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21037/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21036
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21036/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21036/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21036/events
|
https://github.com/huggingface/transformers/pull/21036
| 1,522,867,493
|
PR_kwDOCUB6oc5G1cVB
| 21,036
|
remove flax from `documentation_tests.txt`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Ah, this explains why the previous example had several issues π
\r\n\r\n@ydshieh We don't plan to add FLAX to the doctests, correct?",
"@gante Not a no from me. I think it's good to make sure the examples work (this is in the range of maintenance mode π ). But I would like to have a yes from @sgugger and @LysandreJik before working on it.",
"Not worth any work right now given the usage IMO. There is plenty to do with more priority :-) "
] | 1,673
| 1,673
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
#21009 added `src/transformers/generation/flax_utils.py` to `documentation_tests.txt`, but the CI image doesn't have `jax`/`flax` installed. As a result, the whole doctest suite failed while reporting 0 failures.
This PR removes that file from `documentation_tests.txt`. The CI image used here is the same as the scheduled CI's, and it is intentionally built without `jax`/`flax`.
We can decide whether to use a separate image, though.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21036/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21036",
"html_url": "https://github.com/huggingface/transformers/pull/21036",
"diff_url": "https://github.com/huggingface/transformers/pull/21036.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21036.patch",
"merged_at": 1673177605000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21035
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21035/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21035/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21035/events
|
https://github.com/huggingface/transformers/pull/21035
| 1,522,793,950
|
PR_kwDOCUB6oc5G1MTR
| 21,035
|
feature: update wandb callback to upload checkpoints
|
{
"login": "parambharat",
"id": 12809212,
"node_id": "MDQ6VXNlcjEyODA5MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/12809212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parambharat",
"html_url": "https://github.com/parambharat",
"followers_url": "https://api.github.com/users/parambharat/followers",
"following_url": "https://api.github.com/users/parambharat/following{/other_user}",
"gists_url": "https://api.github.com/users/parambharat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parambharat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parambharat/subscriptions",
"organizations_url": "https://api.github.com/users/parambharat/orgs",
"repos_url": "https://api.github.com/users/parambharat/repos",
"events_url": "https://api.github.com/users/parambharat/events{/privacy}",
"received_events_url": "https://api.github.com/users/parambharat/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey, @sgugger : Thanks for the quick review and suggestions. I've resolved all the issues. :hugs:",
"@parambharat There is a weird issue with the tests. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-) and then pushing an empty commit?",
"Hey, @sgugger. I ran the fix that you suggested and all checks pass now.",
"Thank you @stevhliu. I've committed your recommendations.",
"Thanks again for your contribution!"
] | 1,673
| 1,704
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
The PR updates the `WandbCallback` with the following changes:
- Adds `on_save` method to upload model checkpoints as artifacts.
- Changes the default value of environment variable `WANDB_WATCH` from `gradients` to `false`. This enables quicker training when defaults are used. The user can easily change this behavior by setting the env variable.
- Changes the `WANDB_LOG_MODEL` variable from `bool` to `str` allowing for different settings to upload artifacts.
- Modifies the class docstring to reflect the above changes.
- Fixes broken link to wandb documentation
- Changes the wandb `run_name` from `output_dir` to the wandb auto-generated name. This avoids duplication of run names in the wandb workspace.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
- trainer: @sgugger
- documentation: @stevhliu
## Examples
- Example [colab](https://colab.research.google.com/drive/17imujjBEL2cQL3odAJEbVvjR6zVsDRuH?usp=sharing) reflecting all the changes to the WandbCallback
- Example Weights & Biases [workspace](https://wandb.ai/parambharat/hf_transformers?workspace=user-parambharat) with runs that show different settings.
- Example Weights & Biases [Artifact](https://wandb.ai/parambharat/hf_transformers/artifacts/checkpoint/checkpoint-ajdcez6k/v4) created for checkpoints
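To make the `WANDB_LOG_MODEL` bool-to-str change concrete, here is a minimal sketch of how a string-valued setting might be interpreted. The helper name and the accepted values (`"end"`, `"checkpoint"`, `"false"`) are illustrative assumptions, not the callback's actual API:

```python
import os

def log_model_mode(env=None):
    # Hypothetical helper (not the real callback code): read WANDB_LOG_MODEL
    # as a string instead of a bool, so multiple upload policies are possible.
    env = os.environ if env is None else env
    value = env.get("WANDB_LOG_MODEL", "false").lower()
    if value in ("end", "checkpoint"):
        return value  # "end": final model only; "checkpoint": upload on every save
    return "false"    # any other value disables artifact upload

mode = log_model_mode({"WANDB_LOG_MODEL": "CHECKPOINT"})
```

With a string setting, adding further policies later only requires accepting a new value, whereas a bool would have forced a breaking change.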
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21035/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21035",
"html_url": "https://github.com/huggingface/transformers/pull/21035",
"diff_url": "https://github.com/huggingface/transformers/pull/21035.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21035.patch",
"merged_at": 1673372602000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21034
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21034/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21034/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21034/events
|
https://github.com/huggingface/transformers/issues/21034
| 1,522,687,474
|
I_kwDOCUB6oc5awl3y
| 21,034
|
RAM Out-Of-Memory error with `run_mlm.py` when loading a 6Gb json dataset
|
{
"login": "RomanCast",
"id": 43135864,
"node_id": "MDQ6VXNlcjQzMTM1ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/43135864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RomanCast",
"html_url": "https://github.com/RomanCast",
"followers_url": "https://api.github.com/users/RomanCast/followers",
"following_url": "https://api.github.com/users/RomanCast/following{/other_user}",
"gists_url": "https://api.github.com/users/RomanCast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RomanCast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RomanCast/subscriptions",
"organizations_url": "https://api.github.com/users/RomanCast/orgs",
"repos_url": "https://api.github.com/users/RomanCast/repos",
"events_url": "https://api.github.com/users/RomanCast/events{/privacy}",
"received_events_url": "https://api.github.com/users/RomanCast/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Debugging seems to indicate that this OOM error happens when wrapping the model with DDP:\r\n\r\nhttps://github.com/huggingface/transformers/blob/48d4e147d824efab97637947709d5aa67c809b3d/src/transformers/trainer.py#L1446-L1451\r\n\r\nHowever, I don't really see why the data size would impact this line...",
"I found out that this happens with 4 GPUs, but not with 2 or 1 GPUs, so my workaround at the moment is to train with only 2 GPUs which is slower but doable.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,673
| 1,676
| 1,676
|
NONE
| null |
### System Info
- `transformers` version: 4.23.0.dev0
- Platform: Linux-4.18.0-305.65.1.el8_4.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.13
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes (DDP)
### Who can help?
@sgugger because it might be an error with the Trainer
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When using the `run_mlm.py` script with almost no modifications using a Json dataset of ~6Gb, my job gets killed by SLURM. The stack trace looks like the following :
```
Traceback (most recent call last):
File "/gpfswork/rech/rax/commun/miniconda3/envs/artificial_data/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/gpfswork/rech/rax/commun/miniconda3/envs/artificial_data/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/gpfswork/rech/rax/commun/miniconda3/envs/artificial_data/lib/python3.9/site-packages/torch/distributed/launch.py", line 193, in <module>
main()
File "/gpfswork/rech/rax/commun/miniconda3/envs/artificial_data/lib/python3.9/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/gpfswork/rech/rax/commun/miniconda3/envs/artificial_data/lib/python3.9/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/gpfswork/rech/rax/commun/miniconda3/envs/artificial_data/lib/python3.9/site-packages/torch/distributed/run.py", line 752, in run
elastic_launch(
File "/gpfswork/rech/rax/commun/miniconda3/envs/artificial_data/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/gpfswork/rech/rax/commun/miniconda3/envs/artificial_data/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
=======================================================
src/training/run_mlm.py FAILED
-------------------------------------------------------
Failures:
[1]:
time : 2023-01-06_11:50:42
host : r10i0n8-ib0
rank : 1 (local_rank: 1)
exitcode : -9 (pid: 870220)
error_file: <N/A>
traceback : Signal 9 (SIGKILL) received by PID 870220
-------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-01-06_11:50:42
host : r10i0n8-ib0
rank : 0 (local_rank: 0)
exitcode : -9 (pid: 870219)
error_file: <N/A>
traceback : Signal 9 (SIGKILL) received by PID 870219
=======================================================
slurmstepd: error: Detected 14 oom-kill event(s) in StepId=275597.batch. Some of your processes may have been killed by the cgroup out-of-memory handler.
```
What's weird is that the dataset is not that large (unfortunately I can't share it), and it worked fine with other datasets of similar size (for instance using ~4Gb of OSCAR text datasets). It also worked fine with another Json dataset of the same type but 50 times smaller.
I am putting the issue here because the last messages logged are the following :
```
[INFO|trainer.py:502] 2023-01-06 11:50:25,673 >> max_steps is given, it will override any value given in num_train_epochs
[INFO|trainer.py:556] 2023-01-06 11:50:25,673 >> Using cuda_amp half precision backend
[INFO|trainer.py:725] 2023-01-06 11:50:25,674 >> The following columns in the training set don't have a corresponding argument in `XLMRobertaForMaskedLM.forward` and have been ignored: special_tokens_mask. If special_tokens_mask are not expected by `XLMRobertaForMaskedLM.forward`, you can safely ignore this message.
```
which seems to indicate that it happens inside the Trainer method `_inner_training_loop`.
For reference, here is the command I'm running :
```shell
n_gpus=4
python -m torch.distributed.launch --nproc_per_node $n_gpus \
src/training/run_mlm.py \
--model_type xlm-roberta \
--config_overrides max_position_embeddings=512 \
--tokenizer_name tokenizers/br_tokenizer_30k \
--train_file ${TRAIN_FILE} \
--is_split_into_words \
--line_by_line \
--use_auth_token \
--validation_split_percentage 5 \
--max_eval_samples 5000 \
--max_seq_length 128 \
--eval_steps 500 \
--output_dir $OUTPUT_DIR \
--do_train \
--do_eval \
--load_best_model_at_end \
--metric_for_best_model "loss" \
--greater_is_better False \
--evaluation_strategy steps \
--per_device_train_batch_size $per_device_batch_size \
--per_device_eval_batch_size $per_device_batch_size \
--gradient_accumulation_steps $(( $per_device_total_batch_size / $per_device_batch_size )) \
--fp16 \
--learning_rate 2e-5 \
--weight_decay 1e-2 \
--max_steps 100_000 \
--warmup_steps 10_000 \
--logging_dir $TENSORBOARD_DIR \
--logging_steps 200 \
--save_strategy steps \
--save_steps 1000 \
--save_total_limit 2 \
--preprocessing_num_workers $(( 8 * $n_gpus )) \
--report_to tensorboard \
--seed 42
```
Some of the modifications to `run_mlm.py` involve using pre-tokenized datasets instead of raw text datasets by using the option `--is_split_into_words`.
Any idea why this happens or how to circumvent it?
### Expected behavior
I would expect the Trainer to start training without OOM failure, especially given the fact that `run_mlm.py` tokenizes and groups the sentences in the dataset without OOM issues.
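As a back-of-envelope illustration of why 4 GPUs OOM while 2 don't: with `torch.distributed.launch`, each rank is a separate process, so any dataset copy held in RAM is multiplied by the number of ranks. The overhead figure below is a guessed assumption for illustration, not a measurement:

```python
def estimated_ram_gb(dataset_gb, n_ranks, per_rank_overhead_gb=4.0):
    # Rough estimate: each DDP rank holds its own copy of the in-RAM dataset
    # plus some fixed per-process overhead (model, CUDA context, buffers).
    return n_ranks * (dataset_gb + per_rank_overhead_gb)

# For a ~6 GB json dataset: 2 ranks -> ~20 GB, 4 ranks -> ~40 GB, which could
# explain fitting under a node's RAM cgroup limit only in the 2-GPU case.
two_gpu = estimated_ram_gb(6, 2)
four_gpu = estimated_ram_gb(6, 4)
```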
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21034/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21033
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21033/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21033/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21033/events
|
https://github.com/huggingface/transformers/issues/21033
| 1,522,056,394
|
I_kwDOCUB6oc5auLzK
| 21,033
|
BertTokenizer not release gpu memory after del
|
{
"login": "jaewoo-so",
"id": 36718545,
"node_id": "MDQ6VXNlcjM2NzE4NTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/36718545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaewoo-so",
"html_url": "https://github.com/jaewoo-so",
"followers_url": "https://api.github.com/users/jaewoo-so/followers",
"following_url": "https://api.github.com/users/jaewoo-so/following{/other_user}",
"gists_url": "https://api.github.com/users/jaewoo-so/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaewoo-so/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaewoo-so/subscriptions",
"organizations_url": "https://api.github.com/users/jaewoo-so/orgs",
"repos_url": "https://api.github.com/users/jaewoo-so/repos",
"events_url": "https://api.github.com/users/jaewoo-so/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaewoo-so/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This code doesn't use any GPU memory as tokenizers don't even import `torch`.",
"@sgugger In my case, after run this code, gpu memory is fully occupied. Is your pytorch the gpu version correct?",
"This is because you import `TFBertModel` (I didn't catch it in your first code sample). This imports `tensorflow` which then takes all the GPU memory.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,676
| 1,676
|
NONE
| null |
### System Info
transformers 4.20.1
tensorflow 2.9.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import BertTokenizer, TFBertModel
import torch
max_seq_len = 2028
tokenizer = BertTokenizer.from_pretrained('klue/bert-base', truncation=True, max_seq_len=max_seq_len)
del tokenizer
torch.cuda.empty_cache()
```
### Expected behavior
On Ubuntu 20.04 with an RTX 3090, GPU memory is not released after `del tokenizer`.
How can I release the GPU memory?
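Since the memory here is grabbed by TensorFlow at initialization (via the `TFBertModel` import), one common mitigation is to ask TensorFlow to allocate GPU memory lazily. The sketch below uses TensorFlow's documented `TF_FORCE_GPU_ALLOW_GROWTH` environment variable; it must be set before the first `import tensorflow`:

```python
import os

# Must run before anything imports TensorFlow (e.g. `from transformers import
# TFBertModel`): with allow-growth enabled, TF allocates GPU memory as needed
# instead of reserving nearly all of it up front.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
```

Alternatively, if only the tokenizer is needed, dropping the `TFBertModel` import avoids loading TensorFlow (and its GPU allocation) entirely.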
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21033/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21032
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21032/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21032/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21032/events
|
https://github.com/huggingface/transformers/pull/21032
| 1,521,806,169
|
PR_kwDOCUB6oc5Gxw6N
| 21,032
|
fix parameter name in docstring
|
{
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
fixes parameter name `return_tensor -> return_tensors` in docstring.
Fixes potential confusion.
I believe this is my biggest contribution yet π€£
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21032/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21032",
"html_url": "https://github.com/huggingface/transformers/pull/21032",
"diff_url": "https://github.com/huggingface/transformers/pull/21032.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21032.patch",
"merged_at": 1673007796000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21031
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21031/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21031/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21031/events
|
https://github.com/huggingface/transformers/issues/21031
| 1,521,623,124
|
I_kwDOCUB6oc5asiBU
| 21,031
|
[make repo-consistency] weird warning
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"super, thank you for fixing, Sylvain!"
] | 1,672
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
This doesn't make sense:
```
You are using torch==1.13.0, but torch>=1.9.0 is required to use MCTCTModel. Please upgrade torch.
```
since: `1.13 > 1.9` - a wrong comparison function?
full output:
```
$ make repo-consistency
python utils/check_copies.py
python utils/check_table.py
python utils/check_dummies.py
python utils/check_repo.py
Checking all models are included.
Checking all models are public.
You are using torch==1.13.0, but torch>=1.9.0 is required to use MCTCTModel. Please upgrade torch.
Checking all models are properly tested.
Checking all objects are properly documented.
[...]
```
@sgugger
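The suspected bug can be shown with pure Python: comparing version strings lexicographically gets `"1.13.0"` vs `"1.9.0"` wrong, which would produce exactly this bogus warning. The tuple helper below is an illustrative sketch, not the actual utility code:

```python
def version_tuple(v):
    # Turn "1.13.0" into (1, 13, 0) so comparison is numeric, not lexicographic.
    return tuple(int(part) for part in v.split("."))

string_compare = "1.13.0" >= "1.9.0"                       # False: "1" < "9" char-wise
numeric_compare = version_tuple("1.13.0") >= version_tuple("1.9.0")  # True
```

In practice a version-parsing library (rather than plain string comparison) is the usual fix for this class of bug.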
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21031/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21030
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21030/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21030/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21030/events
|
https://github.com/huggingface/transformers/pull/21030
| 1,521,582,984
|
PR_kwDOCUB6oc5GxCeb
| 21,030
|
[bnb optim] fixing test
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The new test now passes on CI runners! I will review the change and thank you @stas00 β€οΈ !"
] | 1,672
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
**Note to reviewers: we are dealing with a slow test, so a green CI doesn't mean anything.**
This work is a continuation of https://github.com/huggingface/transformers/pull/21019 where this test failure was first reported by @ydshieh
This PR:
- extends/improves the `run_trainer` wrapper which simplifies the bnb test
- drops the percentage-based asserts, as those are quite meaningless since they don't measure the memory used by the optimizer but the whole memory - it replaces them with the actual calculated saved-memory expectation, since we know exactly what the saved memory should be for a particular model: it's `6*params` bytes, except for `nn.Embedding`, which gets fp32 - so let's measure that carefully.
https://github.com/huggingface/transformers/blob/35a7052b61579cfe8df1a059d4cd3359310ec2d1/src/transformers/trainer.py#L1042-L1050
- drops the peak gpu memory comparison since on its own it's totally meaningless, in my testing I get both optims produce the same peak memory - what we care about is the total gpu memory.
- forces 1 gpu - so that the gpu memory usage is the same in all environments; to support 2+ gpus we would need a different threshold for each, which is totally unnecessary
- switches to MBs everywhere, so it's much easier to debug
Now, the test should be very deterministic on any gpu/platform. I still gave a small margin for differences.
You can read my notes in the test for the exact math.
@ydshieh, please verify it works on the CI and we will then merge it. Thank you!
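The `6*params` expectation described above can be sketched as a quick back-of-the-envelope helper (the function name here is made up for illustration; it is not part of the actual test):

```python
def expected_bnb_savings_bytes(total_params: int, embedding_params: int) -> int:
    """Rough GPU memory expected to be saved by swapping Adam for 8-bit Adam.

    Regular Adam keeps two fp32 states per parameter (8 bytes); 8-bit Adam
    keeps two int8 states (2 bytes), saving 6 bytes per parameter -- except
    for nn.Embedding weights, which stay in fp32 (no savings there).
    """
    return 6 * (total_params - embedding_params)

# e.g. a toy model with 50 total params, 40 of them in embeddings:
print(expected_bnb_savings_bytes(50, 40))  # 60 bytes
```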
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21030/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21030",
"html_url": "https://github.com/huggingface/transformers/pull/21030",
"diff_url": "https://github.com/huggingface/transformers/pull/21030.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21030.patch",
"merged_at": 1673542375000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21029
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21029/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21029/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21029/events
|
https://github.com/huggingface/transformers/issues/21029
| 1,521,535,504
|
I_kwDOCUB6oc5asMoQ
| 21,029
|
Fix CLIP pooling for textual inversion so that eos tokens are taken
|
{
"login": "isamu-isozaki",
"id": 23430101,
"node_id": "MDQ6VXNlcjIzNDMwMTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/23430101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isamu-isozaki",
"html_url": "https://github.com/isamu-isozaki",
"followers_url": "https://api.github.com/users/isamu-isozaki/followers",
"following_url": "https://api.github.com/users/isamu-isozaki/following{/other_user}",
"gists_url": "https://api.github.com/users/isamu-isozaki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isamu-isozaki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isamu-isozaki/subscriptions",
"organizations_url": "https://api.github.com/users/isamu-isozaki/orgs",
"repos_url": "https://api.github.com/users/isamu-isozaki/repos",
"events_url": "https://api.github.com/users/isamu-isozaki/events{/privacy}",
"received_events_url": "https://api.github.com/users/isamu-isozaki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,676
| 1,676
|
NONE
| null |
### Feature request
For textual inversion in diffusers, we are adding tokens that have a higher token id than the eos token. So when we get CLIP embeddings for textual-inversion tokens, we need to change the pooling so it takes the eos token and not the argmax token.
### Motivation
This is an issue that should be fixed, as the CLIP embeddings won't work once we add more tokens to the tokenizer.
### Your contribution
I can make a PR for this. This is not an issue in the original implementation of CLIP, since they use pre-existing tokens in the embedding, which has its own pros and cons.
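The pooling change being requested can be sketched in plain Python (a simplified stand-in for the tensor version; the function name is hypothetical):

```python
def pool_eos_hidden_state(last_hidden_state, input_ids, eos_token_id):
    """Pool each sequence's hidden state at the actual eos position.

    CLIP's original pooling uses input_ids.argmax(-1), which only works while
    eos has the highest token id; newly added textual-inversion tokens get
    larger ids, so the eos position must be looked up explicitly.
    """
    pooled = []
    for hidden, ids in zip(last_hidden_state, input_ids):
        eos_pos = ids.index(eos_token_id)  # first occurrence of eos
        pooled.append(hidden[eos_pos])
    return pooled

# batch of one sequence whose added token id (9) exceeds eos (2):
print(pool_eos_hidden_state([[[1.0], [2.0], [3.0]]], [[5, 2, 9]], 2))  # [[2.0]]
```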
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21029/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21028
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21028/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21028/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21028/events
|
https://github.com/huggingface/transformers/pull/21028
| 1,521,528,879
|
PR_kwDOCUB6oc5Gw2xl
| 21,028
|
Refactor script to reduce complexity
|
{
"login": "milyiyo",
"id": 8120990,
"node_id": "MDQ6VXNlcjgxMjA5OTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8120990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/milyiyo",
"html_url": "https://github.com/milyiyo",
"followers_url": "https://api.github.com/users/milyiyo/followers",
"following_url": "https://api.github.com/users/milyiyo/following{/other_user}",
"gists_url": "https://api.github.com/users/milyiyo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/milyiyo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/milyiyo/subscriptions",
"organizations_url": "https://api.github.com/users/milyiyo/orgs",
"repos_url": "https://api.github.com/users/milyiyo/repos",
"events_url": "https://api.github.com/users/milyiyo/events{/privacy}",
"received_events_url": "https://api.github.com/users/milyiyo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21028). All of your documentation changes will be reflected on that endpoint.",
"Thanks for your PR. We prefer the current style for examples as in general users have indicated they prefer to:\r\n- not have to look for intermediate functions but just read the code sequentially\r\n- prefer if return xxx else return yyy statements to the suggested changes in this PR.",
"Thanks for the feedback, I will keep it in mind :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR refactors some functions of the script `run_bart_dlm_flax.py`.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21028/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21028",
"html_url": "https://github.com/huggingface/transformers/pull/21028",
"diff_url": "https://github.com/huggingface/transformers/pull/21028.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21028.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21027
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21027/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21027/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21027/events
|
https://github.com/huggingface/transformers/pull/21027
| 1,521,450,209
|
PR_kwDOCUB6oc5GwlgQ
| 21,027
|
[issues template] update deepspeed owners
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"That's an excellent idea, Sylvain. Added.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21027). All of your documentation changes will be reflected on that endpoint."
] | 1,672
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
add the right contact for deepspeed@accelerate
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21027/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21027",
"html_url": "https://github.com/huggingface/transformers/pull/21027",
"diff_url": "https://github.com/huggingface/transformers/pull/21027.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21027.patch",
"merged_at": 1674091416000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21026
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21026/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21026/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21026/events
|
https://github.com/huggingface/transformers/pull/21026
| 1,521,444,117
|
PR_kwDOCUB6oc5GwkNe
| 21,026
|
Fix arguments passed to predict function in QA Seq2seq training script
|
{
"login": "Observer46",
"id": 48843187,
"node_id": "MDQ6VXNlcjQ4ODQzMTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/48843187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Observer46",
"html_url": "https://github.com/Observer46",
"followers_url": "https://api.github.com/users/Observer46/followers",
"following_url": "https://api.github.com/users/Observer46/following{/other_user}",
"gists_url": "https://api.github.com/users/Observer46/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Observer46/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Observer46/subscriptions",
"organizations_url": "https://api.github.com/users/Observer46/orgs",
"repos_url": "https://api.github.com/users/Observer46/repos",
"events_url": "https://api.github.com/users/Observer46/events{/privacy}",
"received_events_url": "https://api.github.com/users/Observer46/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
I used the script for training Seq2seq QA and realized that `--do_predict` contains a bug: the kwarg `outputs` should be an instance of the class `EvalLoopOutput`, but a NumPy array is passed instead. Together with the extraction of predictions in the body of the method `post_processing_function` (line 610 in run_seq2seq_qa.py), this results in an error every time you run tests with this script.
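To illustrate the mismatch, here is a minimal sketch of the expected call shape (the NamedTuple below is a stand-in mirroring the fields of transformers' `EvalLoopOutput`, and `post_processing_function` is heavily simplified):

```python
from typing import Any, NamedTuple, Optional

class EvalLoopOutput(NamedTuple):
    """Stand-in mirroring transformers.trainer_utils.EvalLoopOutput's fields."""
    predictions: Any
    label_ids: Any
    metrics: Optional[dict]
    num_samples: Optional[int]

def post_processing_function(examples, features, outputs: EvalLoopOutput):
    # The predict path must pass the full EvalLoopOutput, not a bare
    # prediction array, so this attribute access works.
    return outputs.predictions

out = EvalLoopOutput(predictions=[0, 1], label_ids=[0, 1], metrics=None, num_samples=2)
print(post_processing_function(None, None, out))  # [0, 1]
```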
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@karthikrangasai @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21026/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/21026/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21026",
"html_url": "https://github.com/huggingface/transformers/pull/21026",
"diff_url": "https://github.com/huggingface/transformers/pull/21026.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21026.patch",
"merged_at": 1673007583000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21025
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21025/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21025/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21025/events
|
https://github.com/huggingface/transformers/issues/21025
| 1,521,372,873
|
I_kwDOCUB6oc5ark7J
| 21,025
|
Import error because no actual libraries, as far as I can tell.
|
{
"login": "Shikamaru5",
"id": 86502093,
"node_id": "MDQ6VXNlcjg2NTAyMDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/86502093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shikamaru5",
"html_url": "https://github.com/Shikamaru5",
"followers_url": "https://api.github.com/users/Shikamaru5/followers",
"following_url": "https://api.github.com/users/Shikamaru5/following{/other_user}",
"gists_url": "https://api.github.com/users/Shikamaru5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shikamaru5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shikamaru5/subscriptions",
"organizations_url": "https://api.github.com/users/Shikamaru5/orgs",
"repos_url": "https://api.github.com/users/Shikamaru5/repos",
"events_url": "https://api.github.com/users/Shikamaru5/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shikamaru5/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I'm not sure where you are looking but `huggingface_hub` init does have a `CommitOperationAdd` object [here](https://github.com/huggingface/huggingface_hub/blob/ccdfd33ede1500b364d3561ccd6d4b2cc76fe9b2/src/huggingface_hub/__init__.py#L106) and it's been there since 0.10.0 which is the minimal version of huggingface_hub required.",
"Is there a way to fix this easily or do I have to go through all of the modules and replace the ones that will try and call huggingface_hub with the modules it should be calling?",
"I have no idea what you mean here. The code you mention does not match what is actually in the libraries, so it looks like you should just update them to the latest versions and check that your Python environment is actually using those (and not some random older versions).",
"I guess I'm just a little confused because I have transformers version 4.25.1 and huggingface_hub version 0.11.1, I don't have any different versions whether it be Ubuntu or Windows. When I look on Github and go to src/transformers/utils there is a hub.py, and in it as far as I can see has the code that I have shown it wishes to import. When I go to the huggingface_hub __init__ it has the lines in a submodule function that tries to import it as strings.\r\n\r\n Trouble is, why is it going to the __init__, should it not just go to the modules to find the classes it is looking for? Or does __init__ actually serve more of a purpose for that code. After all I've never found Python likes going indirectly through several different messagers, say it calls CommitOperationAdd in hub.py. This goes to __init__ in huggingface_hub, which then goes to .hf_api, which calls from ._commit_api.\r\n\r\n I'd think it'd be easier for hub.py to just call from huggingface_hub import _commit_api. Unless what you're saying is huggingface has a library that doesn't match what is on Github or what you can pip install which in that case, may I have that library?\r\n\r\n I feel like it's more confusing to write out a paragraph about this than it is to just look at the code snippet I provided, however, I can show you the exact code starting from hub.py and maybe even a picture of where I found it located, and do each of these steps all the way to _commit_api. My environment does not want to go any farther until I figure out a way for it to import the class 'CommitOperationAdd'. I just don't know if I told it to grab directly from _commit_api if it'd break the entire program.\r\n\r\nI do appreciate you being patient with me and trying to help me figure this out though, don't get me wrong, and whenever you have suggested something I have looked into it. 
Like today I updated my Ubuntu, Pip, Transformers and Huggingface_hub, although besides Ubuntu everything was up to date.\r\n\r\nI may end up having to try and not use DeepSpeed because it just seems to have some pretty big bugs, maybe I'll try DeepSparse.",
"Hi @Shikamaru5, sorry not getting back to you before. `huggingface_hub` expose most of its top-level attributes in the `__init__.py` module (`create_commit`, `create_repo`, `CommitOperationAdd`,...). You can see the complete list [here](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/__init__.py#L310). This is by design so that users don't have to know in which submodule they can find the methods they need. It also guarantees some flexibility for us: as`huggingface_hub._commit_api` should not be imported directly by users (it is \"private\"), we can make changes to it without caring about backward compatibility as long as the top-level attributes are still in `huggingface_hub/__init__.py`.\r\n\r\nWhat can be confusing when reading the code is that we are doing \"lazy-loading\". When doing `from huggingface_hub import something`, you are only importing `something` and the modules needed to make it work. This speed-up initialization by a lot since it is very rare that you require all modules at once. If you are interested in reading more about it, please have a look at [this PR](https://github.com/huggingface/huggingface_hub/pull/874).\r\n\r\nNow, back to your problem. As @sgugger mentionned that can be caused by a broken environment with older versions of the libs. For example here it seems that you are using Python 3.10 (which is good) but the anaconda path seems to be referring to Python 3.6 (`\"/home/user/anaconda3/lib/python3.6/site-packages/transformers/utils/hub.py\"`). Could that be the cause of your issue?\r\n\r\nIn any case, I guarantee you that once you have a clean environment both lines should load correctly in your Python interpreter:\r\n```py\r\nfrom huggingface_hub import CommitOperationAdd\r\nfrom transformers.utils.hub import CommitOperationAdd\r\n```",
"I'm doing this all locally but before, when my wsl was working, I had got it working by just directly importing from the places I needed to import from. You are correct about the python 3.6 because once I got past that issue it said python 3.6 was bad so I tried to get python fixed, broke all of it, uninstalled wsl and Ubuntu, and now for some reason after a few days of trying and even having someone who used to work on Ubuntu for wsl, on Twitter trying to help me fix it, I haven't been able to do it. My next thought is I've found a program called Colossal AI and I'm going to see if that'll work instead of trying DeepSpeed or DeepSparse. Thank you for taking the time to check out my issue and see how you can help it's really appreciated.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,676
| 1,676
|
NONE
| null |
### System Info
Copy-and-paste the text below in your GitHub issue.
- huggingface_hub version: 0.11.1
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.5
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: C:\Users\name\.huggingface\token
- Has saved token ?: False
- Configured git credential helpers: manager-core
- FastAI: N/A
- Tensorflow: 2.9.1
- Torch: 1.12.1+cu116
- Jinja2: 3.0.2
- Graphviz: N/A
- Pydot: N/A
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.25.1
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.5
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: want to yes
- Using distributed or parallel set-up in script?: maybe?... If my understanding of deepspeed is right then I think so.
### Who can help?
@pacman100
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
accelerate config
Traceback (most recent call last):
File "/home/user/anaconda3/bin/accelerate", line 5, in <module>
from accelerate.commands.accelerate_cli import main
File "/home/user/anaconda3/lib/python3.6/site-packages/accelerate/__init__.py", line 7, in <module>
from .accelerator import Accelerator
File "/home/user/anaconda3/lib/python3.6/site-packages/accelerate/accelerator.py", line 27, in <module>
from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
File "/home/user/anaconda3/lib/python3.6/site-packages/accelerate/checkpointing.py", line 24, in <module>
from .utils import (
File "/home/user/anaconda3/lib/python3.6/site-packages/accelerate/utils/__init__.py", line 101, in <module>
from .megatron_lm import (
File "/home/user/anaconda3/lib/python3.6/site-packages/accelerate/utils/megatron_lm.py", line 32, in <module>
from transformers.modeling_outputs import (
File "/home/user/anaconda3/lib/python3.6/site-packages/transformers/__init__.py", line 30, in <module>
from . import dependency_versions_check
File "/home/user/anaconda3/lib/python3.6/site-packages/transformers/dependency_versions_check.py", line 17, in <module>
from .utils.versions import require_version, require_version_core
File "/home/user/anaconda3/lib/python3.6/site-packages/transformers/utils/__init__.py", line 59, in <module>
from .hub import (
File "/home/user/anaconda3/lib/python3.6/site-packages/transformers/utils/hub.py", line 32, in <module>
from huggingface_hub import (
ImportError: cannot import name 'CommitOperationAdd'
The transformers library has in the utils folder, __init__, in this file it has:
from huggingface_hub import (
CommitOperationAdd,
HfFolder,
create_commit,
create_repo,
get_hf_file_metadata,
hf_hub_download,
hf_hub_url,
whoami,
)
however huggingface_hub doesn't seem to have these, so am I missing something, or is it that transformers needs to be updated? I have the latest version and the version from GitHub locally for both huggingface_hub and transformers. Now that I'm looking at it, there seems to be another import mistake that'll be flagged once I get past this one:
from huggingface_hub.utils import (
EntryNotFoundError,
LocalEntryNotFoundError,
RepositoryNotFoundError,
RevisionNotFoundError,
hf_raise_for_status,
)
Would appreciate any assistance with this, and since I don't know if this could be considered a huggingface_hub error or a transformers error, I'll post it on both.
### Expected behavior
I guess I expect to run into the next bug until I've solved all the bugs and can use accelerate and deepspeed (fingers crossed, knock on wood). For a non-tongue-in-cheek answer: I think that, according to the code at least, it should import from the different libraries and files described in the hub.py script, pulled from the huggingface_hub library. Obviously I could be mistaken, but Python isn't having it, and I haven't been able to find what hub.py wants, or where.
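For context on why those names live in huggingface_hub's `__init__.py` without obvious definitions there: the library re-exports them lazily. A minimal sketch of that pattern (PEP 562 module `__getattr__`), using a toy mapping rather than huggingface_hub's real table:

```python
import importlib
import sys
import types

def make_lazy_module(name, attr_to_submodule):
    """Build a module whose attributes import their submodule on first access."""
    mod = types.ModuleType(name)

    def __getattr__(attr):
        if attr in attr_to_submodule:
            sub = importlib.import_module(attr_to_submodule[attr])
            value = getattr(sub, attr)
            setattr(mod, attr, value)  # cache so __getattr__ runs only once
            return value
        raise AttributeError(f"module {name!r} has no attribute {attr!r}")

    mod.__getattr__ = __getattr__
    sys.modules[name] = mod
    return mod

# accessing toy_hub.sqrt triggers `import math` only at this point:
toy_hub = make_lazy_module("toy_hub", {"sqrt": "math"})
print(toy_hub.sqrt(9))  # 3.0
```

This is why grepping the `__init__.py` for a plain `def CommitOperationAdd` finds nothing, yet `from huggingface_hub import CommitOperationAdd` works in a healthy environment.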
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21025/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21024
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21024/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21024/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21024/events
|
https://github.com/huggingface/transformers/issues/21024
| 1,521,224,550
|
I_kwDOCUB6oc5arAtm
| 21,024
|
transformers/examples/tensorflow/tokenclassification: Error at prepare_tf_dataset() using demo code with default parameters.
|
{
"login": "adaml-iri",
"id": 90871321,
"node_id": "MDQ6VXNlcjkwODcxMzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/90871321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adaml-iri",
"html_url": "https://github.com/adaml-iri",
"followers_url": "https://api.github.com/users/adaml-iri/followers",
"following_url": "https://api.github.com/users/adaml-iri/following{/other_user}",
"gists_url": "https://api.github.com/users/adaml-iri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adaml-iri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adaml-iri/subscriptions",
"organizations_url": "https://api.github.com/users/adaml-iri/orgs",
"repos_url": "https://api.github.com/users/adaml-iri/repos",
"events_url": "https://api.github.com/users/adaml-iri/events{/privacy}",
"received_events_url": "https://api.github.com/users/adaml-iri/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"When I attempt to run demo I get following error:\r\n\r\nTraceback (most recent call last):\r\n File \"D:\\transformer\\foo\\lib\\site-packages\\transformers\\tokenization_utils_base.py\", line 717, in convert_to_tensors\r\n tensor = as_tensor(value)\r\nValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"D:\\transformer\\transformers\\examples\\tensorflow\\token-classification\\run_ner.py\", line 592, in <module>\r\n main()\r\n File \"D:\\transformer\\transformers\\examples\\tensorflow\\token-classification\\run_ner.py\", line 415, in main\r\n tf_train_dataset = model.prepare_tf_dataset(\r\n File \"D:\\transformer\\foo\\lib\\site-packages\\transformers\\modeling_tf_utils.py\", line 1384, in prepare_tf_dataset\r\n tf_dataset = dataset.to_tf_dataset(\r\n File \"D:\\transformer\\foo\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 405, in to_tf_dataset\r\n output_signature, columns_to_np_types = dataset._get_output_signature(\r\n File \"D:\\transformer\\foo\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 258, in _get_output_signature\r\n test_batch = collate_fn(test_batch, **collate_fn_args)\r\n File \"D:\\transformer\\foo\\lib\\site-packages\\transformers\\data\\data_collator.py\", line 43, in __call__\r\n return self.tf_call(features)\r\n File \"D:\\transformer\\foo\\lib\\site-packages\\transformers\\data\\data_collator.py\", line 347, in tf_call\r\n batch = self.tokenizer.pad(\r\n File \"D:\\transformer\\foo\\lib\\site-packages\\transformers\\tokenization_utils_base.py\", line 3017, in pad\r\n return BatchEncoding(batch_outputs, tensor_type=return_tensors)\r\n File \"D:\\transformer\\foo\\lib\\site-packages\\transformers\\tokenization_utils_base.py\", line 210, in __init__\r\n 
self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)\r\n File \"D:\\transformer\\foo\\lib\\site-packages\\transformers\\tokenization_utils_base.py\", line 733, in convert_to_tensors\r\n raise ValueError(\r\nValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected).\r\n\r\nMy problem seems to be with prepare_tf_dataset() when run locally in python script.\r\nI have noticed that when I try notebooks/examples/token_classification-tf.ipynb in Google Colab everything works fine. ",
"@adaml-iri \r\n\r\n**How to solve the issue :** \r\n\r\nAdd \"--pad_to_max_length True\" as an argument, so to start training you need to write, \r\n`python run_ner.py --model_name_or_path bert-base-uncased --dataset_name conll2003 --output_dir /tmp/test-ner --pad_to_max_length True`\r\n\r\n**Why is it happening ?**\r\n\r\nIt's due to shape mismatch in the training sample's labels.(In line 717 in https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils_base.py) where the code is trying to convert the labels to numpy array using `np.asarray` but all the examples doesn't have labels with same shape so it's happening. \r\n\r\n\r\n**Here is the output you might see if this is resolved :** \r\n\r\n...\r\nAll model checkpoint layers were used when initializing TFBertForTokenClassification.\r\n\r\nSome layers of TFBertForTokenClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nYou're using a BertTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\nNo loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! 
To disable this behaviour please pass a loss argument, or explicitly pass `loss=None` if you do not want your model to compute a loss.\r\n***** Running training *****\r\n Num examples = 14041\r\n Num Epochs = 3.0\r\n Instantaneous batch size per device = 8\r\n Total train batch size = 8\r\n2023-01-07 00:49:52.596921: W tensorflow/core/framework/dataset.cc:769] Input of GeneratorDatasetOp::Dataset will not be optimized because the dataset does not implement the AsGraphDefInternal() method needed to apply optimizations.\r\nEpoch 1/3\r\n 110/1755 [>.............................] - ETA: 2:28:11 - loss: 0.3548\r\n\r\n\r\n\r\nLet me know if you managed to resolve it or not, \r\n\r\nThanks, \r\nsusnato.",
"Thank you for your quick response. Everything works now.\r\nMuch appreciated π ",
"Hi @adaml-iri - sorry for the delay with dealing with this! I'm glad your issue got resolved, but when I run the code locally I don't get the same issue, and I don't think that example should require `pad_to_max_length` to be set to work. Can you try updating `datasets` as well with `pip install --upgrade datasets` and then checking if the issue persists?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,677
| 1,677
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.13
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@Rocketknight1
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. git clone https://github.com/huggingface/transformers
2. cd transformers
3. pip install .
4. cd examples\tensorflow\token-classification
5. pip install -r requirements.txt
6. python run_ner.py \
--model_name_or_path bert-base-uncased \
--dataset_name conll2003 \
--output_dir /tmp/test-ner
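The `ValueError` discussed in the comments stems from ragged label lists: `np.asarray` cannot stack sequences of unequal length into one rectangular array, which is exactly what `--pad_to_max_length True` works around. A minimal pure-Python sketch (the helper name `can_batch` is hypothetical, for illustration only):

```python
def can_batch(sequences):
    # A list of lists can be stacked into a rectangular array
    # only if every inner list has the same length.
    return len({len(s) for s in sequences}) == 1

labels = [[0, 1, 2, 1], [0, 3]]            # ragged: np.asarray would fail here
assert not can_batch(labels)

max_len = max(len(s) for s in labels)
padded = [s + [-100] * (max_len - len(s)) for s in labels]  # pad labels with -100
assert can_batch(padded)
```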
### Expected behavior
Expected the example to fine-tune BERT on CoNLL-2003.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21024/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21023
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21023/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21023/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21023/events
|
https://github.com/huggingface/transformers/pull/21023
| 1,521,159,647
|
PR_kwDOCUB6oc5GvlP-
| 21,023
|
Fix bigbird random attention
|
{
"login": "Bearnardd",
"id": 43574448,
"node_id": "MDQ6VXNlcjQzNTc0NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/43574448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bearnardd",
"html_url": "https://github.com/Bearnardd",
"followers_url": "https://api.github.com/users/Bearnardd/followers",
"following_url": "https://api.github.com/users/Bearnardd/following{/other_user}",
"gists_url": "https://api.github.com/users/Bearnardd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bearnardd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bearnardd/subscriptions",
"organizations_url": "https://api.github.com/users/Bearnardd/orgs",
"repos_url": "https://api.github.com/users/Bearnardd/repos",
"events_url": "https://api.github.com/users/Bearnardd/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bearnardd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @sanchit-gandhi! Thank you very much for this detailed review. It is really helpful since this is my first time working with JAX :). I will apply the changes during the weekend. Have a great day!",
"Awesome, very glad to hear that the pointers were helpful π€ feel free to post here if you have any questions - it's a bit of a fiddly fix and I'm more than happy to help if you get stuck on anything!\r\n\r\nThere's actually a similar rng trick that we use in Flax BEIT:\r\nhttps://github.com/huggingface/transformers/blob/b210c83a78022226ce48402cd67d8c8da7afbd8d/src/transformers/models/beit/modeling_flax_beit.py#L161\r\n\r\nYou can follow through the logic we employ with `\"droppath\"` and `droppath_rng` to see a working example of what we want to do here!",
"Hi @sanchit-gandhi! Sorry for the late response but lately I was in the process of changing workplaces as well as on vacation so I have not checked github for a while :). I have implemented your comments but I have two follow up questions:\r\n\r\n1) Should I remove all `numpy` calls in the modeling file even the ones like `np.zeros` or `np.arange` or only the ones related to the randomness?\r\n\r\n2) I have some problems with `indices_prng_key` for the scenario when `FlaxBigBirdBlockSparseAttention` is used but `deterministic=True` for which `indices_prng_key=None`. Since even though deterministic is set to False the random jax functions are still being called and in this case the provided `rng_key=None` which results in the error. ",
"Hey @Bearnardd! Awesome to see that you've picked-up this PR again!\r\n\r\n1. Yes please! If you could replace all NumPy calls with their JAX equivalents that would be grand! This will keep all tensors on the accelerator device (GPU/TPU) rather than pulling them back to the host\r\n2. In this case, could we add `if/else` logic that returns the correct attention mask when deterministic? E.g.\r\n```python\r\nif self.deterministic:\r\n # do the deterministic inference attention with no randomness\r\nelse:\r\n # do the stochastic training attention with jnp randomness\r\n```\r\nA similar logic is used in the Flax dropout module: https://flax.readthedocs.io/en/latest/_modules/flax/linen/stochastic.html#Dropout",
"Hi @sanchit-gandhi! I have replaced all NumPy calls but frankly I am not sure if I understand the second part correctly. Could you explain what do you mean by `deterministic inference attention` and where that `if/else` logic should be places? ",
"Hey @Bearnardd! Very cool! Do you mind pushing your changes to origin so that I can see the code? This might make it easier to help-out with the deterministic issue!\r\n\r\nEssentially, we only want to do the random operations when we're training, not at inference time. During inference, we want everything to be deterministic. This is like dropout - we only do this during training and not inference, when we want to disable dropout and have all the nodes be active.\r\n\r\nWe can check if the model is deterministic through the attribute `self.determisitic` (like `self.training` in PyTorch). What we need to do is add some logic so that the random calls are only made _if_ `self.deterministic=False` (training): we know we're in training mode and we want all of the randomness, so we activate all the random calls. \r\n_Else_ `self.deterministic=True` (inference) and we're indeterministic, then we don't want to do any of the randomness, e.g. skip all of it.",
"Hi @sanchit-gandhi! Sure I will push the changes around Friday since I am currently at a business trip and I do not have my personal laptop :/",
"Hi @sanchit-gandhi! I have pushed the changes.",
"Hi @sanchit-gandhi all copied from statements are back, without one for PredictionHead since different dtype still counts are not copied and it results in the error",
"Hi @amyeroberts @sanchit-gandhi! I changed if checking to `deterministic` and added `unittestskip` for equivalence tests. Probably around weekend I will create a issue regarding bug in Pytorch's implementation as well as PR fix. Nevertheless I guess this PR is ready to be merged.",
"Yep - it all looks good to me. Thanks again for this contribution, @Bearnardd! "
] | 1,672
| 1,682
| 1,682
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the bug mentioned in the [issue](https://github.com/huggingface/transformers/issues/17355) by transitioning from `np.random` to `jax.random`. It also adds several minor changes needed to run the new code and pass all the tests.
Fixes https://github.com/huggingface/transformers/issues/17355
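The core of the fix — skip the random path entirely at inference time and thread an explicit rng through at training time, as in Flax's `Dropout` — can be sketched in plain Python (the function and its signature are hypothetical, purely illustrative; the real code uses `jax.random` PRNG keys):

```python
import random

def sample_random_blocks(num_blocks, num_rand, deterministic, rng=None):
    if deterministic:
        # inference: no randomness at all, return a fixed choice
        return list(range(num_rand))
    # training: an explicit rng must be threaded in, mirroring jax.random's
    # explicit PRNG keys (no hidden global state)
    if rng is None:
        raise ValueError("an rng must be provided when deterministic=False")
    return rng.sample(range(num_blocks), num_rand)

assert sample_random_blocks(10, 3, deterministic=True) == [0, 1, 2]
picked = sample_random_blocks(10, 3, deterministic=False, rng=random.Random(0))
assert len(picked) == 3 and set(picked) <= set(range(10))
```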
## Before submitting
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
## Who can review?
@sanchit-gandhi @thevasudevgupta @patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21023/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21023",
"html_url": "https://github.com/huggingface/transformers/pull/21023",
"diff_url": "https://github.com/huggingface/transformers/pull/21023.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21023.patch",
"merged_at": 1682617948000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21022
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21022/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21022/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21022/events
|
https://github.com/huggingface/transformers/pull/21022
| 1,520,826,975
|
PR_kwDOCUB6oc5GucYE
| 21,022
|
[NumPy] Remove references to deprecated NumPy type aliases
|
{
"login": "hvaara",
"id": 1535968,
"node_id": "MDQ6VXNlcjE1MzU5Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1535968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hvaara",
"html_url": "https://github.com/hvaara",
"followers_url": "https://api.github.com/users/hvaara/followers",
"following_url": "https://api.github.com/users/hvaara/following{/other_user}",
"gists_url": "https://api.github.com/users/hvaara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hvaara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hvaara/subscriptions",
"organizations_url": "https://api.github.com/users/hvaara/orgs",
"repos_url": "https://api.github.com/users/hvaara/repos",
"events_url": "https://api.github.com/users/hvaara/events{/privacy}",
"received_events_url": "https://api.github.com/users/hvaara/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger I heard you might be the right person to review this. Please let me know if you have any questions π€ "
] | 1,672
| 1,673
| 1,672
|
CONTRIBUTOR
| null |
This change replaces references to a number of deprecated NumPy type aliases (`np.bool`, `np.int`, `np.float`, `np.complex`, `np.object`, `np.str`) with their recommended replacements (`bool`, `int`, `float`, `complex`, `object`, `str`).
NumPy 1.24 drops the deprecated aliases, so we must remove uses before updating NumPy.
See huggingface/diffusers#1810 for a similar issue in diffusers.
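The mechanical nature of the change can be illustrated with a small sketch (assuming NumPy is installed):

```python
import numpy as np

# Before (fails on NumPy >= 1.24): np.zeros(3, dtype=np.bool)
# After: the builtin type works the same and is not deprecated.
arr = np.zeros(3, dtype=bool)
assert arr.dtype == np.bool_   # np.bool_ (the scalar type) is NOT deprecated
```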
Co-authored-by: Peter Hawkins <phawkins@google.com>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21022/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21022/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21022",
"html_url": "https://github.com/huggingface/transformers/pull/21022",
"diff_url": "https://github.com/huggingface/transformers/pull/21022.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21022.patch",
"merged_at": 1672941730000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21021
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21021/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21021/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21021/events
|
https://github.com/huggingface/transformers/pull/21021
| 1,520,799,249
|
PR_kwDOCUB6oc5GuWMQ
| 21,021
|
`blip` support for training
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Perfect, thanks for clarifying @sgugger ! ",
"_The documentation is not available anymore as the PR was closed or merged._",
"@younesbelkada @sgugger hi, thanks for contributing this code, but I found two possible bugs:\r\n\r\n1. the code shift `labels` to `decoder_input_id` ([here](https://github.com/huggingface/transformers/pull/21021/files#diff-e483643fc206cde147f2483924507d9a407db540b01bf4028c72b8ec6cc3ffabR1209)) and the code shift `labels` when computing loss [(here)](https://github.com/huggingface/transformers/pull/21021/files#diff-00846f08e1b2a41509f5f669a49fc36baac8555a83da24c30f8ac9e7a9024d59R900) should only keep one, and I prefer to keep the former one and delete the later.\r\n2. The BERT tokenizer has added a start token before the sequence, and the `_shift_right` function will add another one (pad), so it should use `forced_bos_token_id` like BART for generation.",
"Moreover, I think the [`reduction`](https://github.com/huggingface/transformers/pull/21021/files#diff-e483643fc206cde147f2483924507d9a407db540b01bf4028c72b8ec6cc3ffabR1227) function of `CrossEntropyLoss` should be set to `'mean'`, or you will get a loss more than tens or hundreds, which is uncommon and may affect the optimization.",
"Thanks for your valuable comments @StevenTang1998! @younesbelkada in any case it would probably be best to have verified this branch in a notebook on a toy image captioning dataset. Making the code as similar as possible to our other generative models (like T5, BART or GPT-2) would be great.",
"Hi @younesbelkada, I encountered the same error as mentioned by @dxlong2000.\r\nI cloned this repository but the error is still there.\r\n\r\nValueError: Expected input batch_size (0) to match target batch_size (29).",
"Hi @faiqff94 \r\nAll the issues related to BLIP training should be resolved, if you follow what has been done in https://colab.research.google.com/drive/1lbqiSiA0sDF7JDWPeS0tccrM85LloVha?usp=sharing you should not get any issue. Can you share a reproducible handy script?"
] | 1,672
| 1,677
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes: https://discuss.huggingface.co/t/finetune-blip-on-customer-dataset-20893/28446
Before this PR, it was not possible to fine-tune BLIP on a custom dataset, mainly because the code did not support "on-the-fly" right-shifting of `decoder_input_ids`.
This PR also harmonizes some attributes inside `BlipForQuestionAnswering` --> I replaced `decoder_bos_token_id` with `decoder_start_token_id` to make it consistent with T5 etc.
For all VQA models we should (at train time):
1- make sure `labels` is not None
2- create `decoder_input_ids` based on those (make sure the padding is always on the right side)
3- Infer on the text decoder
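Step 2 can be sketched in plain Python (a simplified, framework-free version of the usual shift-right logic; `-100` is the ignore index used for loss masking):

```python
def shift_right(labels, decoder_start_token_id, pad_token_id):
    # Prepend the decoder start token and drop the last position,
    # then replace the -100 loss-masking value with the pad token.
    shifted = [[decoder_start_token_id] + seq[:-1] for seq in labels]
    return [[pad_token_id if tok == -100 else tok for tok in seq] for seq in shifted]

labels = [[5, 6, 7, -100]]
assert shift_right(labels, decoder_start_token_id=0, pad_token_id=1) == [[0, 5, 6, 7]]
```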
I feel that we should probably add more tests and create a `VisualQuestionAnsweringMixin` in a follow up PR to make sure this is done for all VQA models (as I'd expect more VQA models to be added this year)
cc @NielsRogge @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21021/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21021",
"html_url": "https://github.com/huggingface/transformers/pull/21021",
"diff_url": "https://github.com/huggingface/transformers/pull/21021.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21021.patch",
"merged_at": 1674037478000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21020
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21020/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21020/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21020/events
|
https://github.com/huggingface/transformers/pull/21020
| 1,520,602,901
|
PR_kwDOCUB6oc5GtqhV
| 21,020
|
Time series transformer: input projection and Std scaler
|
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@NielsRogge I have one more fix here... I can make `static_real_features` be optional",
"@NielsRogge i believe this can be merged so i can then start to add these changes to the informer model... what do you think?"
] | 1,672
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds an initial input projection layer and a `d_model` hyperparameter.
Adds a `StdScaler` for the time series transformer, as well as the corresponding features.
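For reference, a minimal sketch of what a standard scaler does (population std here; the actual `StdScaler` also handles observed-value masks and a minimum-scale floor, which are omitted):

```python
def std_scale(values):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5 or 1.0          # guard against a zero scale
    return [(v - mean) / std for v in values], mean, std

scaled, mean, std = std_scale([1.0, 2.0, 3.0])
assert abs(mean - 2.0) < 1e-9
assert abs(sum(scaled)) < 1e-9       # scaled series has zero mean
```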
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21020/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21020/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21020",
"html_url": "https://github.com/huggingface/transformers/pull/21020",
"diff_url": "https://github.com/huggingface/transformers/pull/21020.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21020.patch",
"merged_at": 1677048614000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21019
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21019/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21019/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21019/events
|
https://github.com/huggingface/transformers/pull/21019
| 1,520,575,681
|
PR_kwDOCUB6oc5Gtkcm
| 21,019
|
Fix `test_run_seq2seq_bnb` CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @manuelciosici",
"hmm, I didn't write this test and don't know why these numbers were set, so perhaps it's easier to ask the one who wrote it?\r\n\r\nIf it's not possible please let me know I will be able to study it later, as I'm off to the airport now.",
"but yes, such tests should do the measurements per number of gpus usually. i.e. 1 gpu - measurement 1, 2 gpus - measurement 2, etc.\r\n\r\nthe easiest fix is to ensure you run on exactly that many gpus always and then you need only one \"truth\" to measure against. ",
"(we can investigate later, not urgent)\r\n\r\n@stas00 The first error occurred is that we expect the GPU memory usage will be larger when not using BNB (i.e. `gpu_peak_mem_orig `) than when using BNB (i.e. `gpu_peak_mem_bnb`). The test assert 10% difference. However, on our single GPU runner, \r\n\r\n```bash\r\ngpu_peak_mem_orig=509447168\r\ngpu_peak_mem_bnb=510622720\r\n```\r\nwhich is quite weird (intuitively).\r\n\r\n- We can definitely adjust the values for different environment, but probably it's a good idea to understand what's going on here if possible.\r\n- We left comment in the original PR page, but haven't heard from the PR author. But I cc them in a comment in this PR.\r\n\r\n ",
"some quick thoughts:\r\n- do we clear cuda cache there between the measurements? and first call `gc.collect()` (then cache clear) \r\n- using a larger model should make the difference (savings) more distinct\r\n\r\nand of course the test might be failing if bnb is broken - recent update? try earlier version?",
"I can reproduce the failure, looking",
"Fixed here https://github.com/huggingface/transformers/pull/21030\r\n",
"Close in favor of #21030 21030"
] | 1,672
| 1,675
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
Fix `test_run_seq2seq_bnb` CI. See [failed job run](https://github.com/huggingface/transformers/actions/runs/3834635537/jobs/6527258225)
It seems the expected reduction in GPU memory usage only happens when running the test in a multi-GPU env.
I simply added `require_torch_multi_gpu` without trying to understand why it fails in a single-GPU env.
I can try to figure it out, but the probability that @stas00 knows the reason > 1.0, so I would like to see if he has any comment first.
Error
```
tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_bnb
(line 256) AssertionError: -0.0023021928988980356 not greater than 10 : should use very little peak gpu memory with BNB, compared to without itbut got gpu_peak_mem_orig=509447168 and gpu_peak_mem_bnb=510622720
```
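The assertion above compares relative peak-memory savings; the computation behind that failing number can be reproduced with a small hypothetical helper (the exact formula used in the test may differ slightly):

```python
def peak_mem_saving_percent(mem_orig, mem_bnb):
    # Percentage of peak GPU memory saved by BNB relative to the baseline.
    return (mem_orig - mem_bnb) / mem_orig * 100

saving = peak_mem_saving_percent(509447168, 510622720)
assert saving < 10    # the test expects > 10% savings, hence the failure
assert saving < 0     # BNB actually used slightly *more* memory on this run
```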
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21019/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21019",
"html_url": "https://github.com/huggingface/transformers/pull/21019",
"diff_url": "https://github.com/huggingface/transformers/pull/21019.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21019.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21018
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21018/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21018/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21018/events
|
https://github.com/huggingface/transformers/pull/21018
| 1,520,569,000
|
PR_kwDOCUB6oc5Gti9X
| 21,018
|
Generate: post-generate config TF doctest fix
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
MEMBER
| null |
# What does this PR do?
Same as in this PR https://github.com/huggingface/transformers/pull/20804, but for TF.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21018/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21018",
"html_url": "https://github.com/huggingface/transformers/pull/21018",
"diff_url": "https://github.com/huggingface/transformers/pull/21018.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21018.patch",
"merged_at": 1672918718000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21017
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21017/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21017/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21017/events
|
https://github.com/huggingface/transformers/issues/21017
| 1,520,527,501
|
I_kwDOCUB6oc5aoWiN
| 21,017
|
The generation input shape and the output shape from the official scripts are completely different for the TFLite model
|
{
"login": "generic-matrix",
"id": 15347450,
"node_id": "MDQ6VXNlcjE1MzQ3NDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/15347450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/generic-matrix",
"html_url": "https://github.com/generic-matrix",
"followers_url": "https://api.github.com/users/generic-matrix/followers",
"following_url": "https://api.github.com/users/generic-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/generic-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/generic-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/generic-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/generic-matrix/orgs",
"repos_url": "https://api.github.com/users/generic-matrix/repos",
"events_url": "https://api.github.com/users/generic-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/generic-matrix/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante and @Rocketknight1 ",
"@sgugger @gante @Rocketknight1 any update on the same ?\r\n\r\nI even tried the same with the collab below taking latest version of transformers tensorflow into consideration , I get the same issue as above. When we import the new model with different output shape onto the android project (gpt2) , I get the issue as below :\r\n\r\n```\r\nE/AndroidRuntime: FATAL EXCEPTION: main\r\n Process: co.huggingface.android_transformers.gpt2, PID: 17293\r\n java.lang.IllegalArgumentException: ByteBuffer is not a valid flatbuffer model\r\n at org.tensorflow.lite.NativeInterpreterWrapper.createModelWithBuffer(Native Method)\r\n at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:60)\r\n at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:224)\r\n at co.huggingface.android_transformers.gpt2.ml.GPT2Client$loadModel$2.invokeSuspend(GPT2Client.kt:138)\r\n at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)\r\n at kotlinx.coroutines.DispatchedTask.run(Dispatched.kt:241)\r\n at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:594)\r\n at kotlinx.coroutines.scheduling.CoroutineScheduler.access$runSafely(CoroutineScheduler.kt:60)\r\n at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:740)\r\n```\r\n\r\nhttps://gist.github.com/sudo-carson/158d9b9e7208e42977b08d966f3f4989\r\n\r\n",
"Hello @sgugger @gante @Rocketknight1 \r\n\r\nThe issue has been fixed, the below code can be used.\r\n\r\nThe output should be keras_output.logits as in the code below \r\n\r\n```\r\nmodel = TFGPT2LMHeadModel.from_pretrained('gpt2') # or 'distilgpt2'\r\ninput = tf.keras.Input([ 64 ], batch_size=1, dtype=tf.int32)\r\nkeras_output = model(input, training=False)\r\nmodel = tf.keras.Model(input, keras_output.logits)\r\nconverter = tf.lite.TFLiteConverter.from_keras_model(model)\r\n\r\n# For FP16 quantization:\r\n# converter.optimizations = [tf.lite.Optimize.DEFAULT]\r\n# converter.target_spec.supported_types = [tf.float16]\r\n\r\ntflite_model = converter.convert()\r\n\r\nopen(\"model.tflite\", \"wb\").write(tflite_model)\r\n```",
"Hey @generic-matrix π \r\n\r\nThank you for raising the issue and for reporting its fix as well <3 For context, we haven't been checking whether our models are supported by TFLite. That's something we plan to rectify over this year, with notebooks and demos as well!\r\n\r\n(closing as it is fixed)"
] | 1,672
| 1,673
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 2.3.0
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
and
- `transformers` version: 4.25.1
- Platform: Linux-5.10.133+-x86_64-with-glibc2.27
- Python version: 3.8.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu116 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@patric @anton-l @sanchit-gandhi @Rocketknight1
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running the gpt2.py conversion script from this [link](https://github.com/huggingface/tflite-android-transformers/tree/master/models_generation) produces a TFLite model with input shape [3 5] instead of [1 64]:
```
import transformers
import tensorflow
print(transformers.__version__)
print(tensorflow.__version__)
```
```
2.3.0
2.2.0
```
```
!wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-64.tflite
import numpy as np
import tensorflow as tf
tflite_model_path = 'gpt2-64.tflite'
# Load the TFLite model and allocate tensors
interpreter = tf.lite.Interpreter(model_path=tflite_model_path)
interpreter.allocate_tensors()
# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
#print the output
input_data = np.array(np.random.random_sample((input_shape)), dtype=np.int32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data.shape)
print(input_shape)
```
```
>(1, 64, 50257)
>[ 1 64]
```
```
import tensorflow as tf
from transformers import TFGPT2LMHeadModel
import numpy as np
model = TFGPT2LMHeadModel.from_pretrained('gpt2') # or 'distilgpt2'
input_spec = tf.TensorSpec([1, 64], tf.int32)
model._set_inputs(input_spec, training=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# For FP16 quantization:
# converter.optimizations = [tf.lite.Optimize.DEFAULT]
# converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
open("gpt2-64-2.tflite", "wb").write(tflite_model)
tflite_model_path = 'gpt2-64-2.tflite'
# Load the TFLite model and allocate tensors
interpreter = tf.lite.Interpreter(model_path=tflite_model_path)
interpreter.allocate_tensors()
# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
#print the output
input_data = np.array(np.random.random_sample((input_shape)), dtype=np.int32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data.shape)
print(input_shape)
```
```
>(3, 5, 50257)
>[3 5]
```
**Expected
>(1, 64, 50257)
>[ 1 64]**
```
import transformers
import tensorflow
print(transformers.__version__)
print(tensorflow.__version__)
>4.25.1
> 2.9.2
```
```
!wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-64.tflite
import numpy as np
import tensorflow as tf
tflite_model_path = 'gpt2-64.tflite'
# Load the TFLite model and allocate tensors
interpreter = tf.lite.Interpreter(model_path=tflite_model_path)
interpreter.allocate_tensors()
# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
#print the output
input_data = np.array(np.random.random_sample((input_shape)), dtype=np.int32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data.shape)
print(input_shape)
```
```
>(1, 64, 50257)
> [ 1 64]
```
```
import tensorflow as tf
from transformers import TFGPT2LMHeadModel
import numpy as np
model = TFGPT2LMHeadModel.from_pretrained('gpt2') # or 'distilgpt2'
input_spec = tf.TensorSpec([1, 64], tf.int32)
model._set_inputs(input_spec, training=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# For FP16 quantization:
# converter.optimizations = [tf.lite.Optimize.DEFAULT]
# converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
open("gpt2-64-2.tflite", "wb").write(tflite_model)
tflite_model_path = 'gpt2-64-2.tflite'
# Load the TFLite model and allocate tensors
interpreter = tf.lite.Interpreter(model_path=tflite_model_path)
interpreter.allocate_tensors()
# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
#print the output
input_data = np.array(np.random.random_sample((input_shape)), dtype=np.int32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data.shape)
print(input_shape)
```
```
>(2, 1, 12, 1, 64)
>[1 1]
```
**Expected
>(1, 64, 50257)
>[ 1 64]**
How can we fix this?
### Expected behavior
Expected inputshape is [ 1 64] and output shape is (1, 64, 50257)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21017/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21016
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21016/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21016/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21016/events
|
https://github.com/huggingface/transformers/issues/21016
| 1,520,412,878
|
I_kwDOCUB6oc5an6jO
| 21,016
|
VideoMAE missing CLS tokens in embedding
|
{
"login": "z5163449",
"id": 30581408,
"node_id": "MDQ6VXNlcjMwNTgxNDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/30581408?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/z5163449",
"html_url": "https://github.com/z5163449",
"followers_url": "https://api.github.com/users/z5163449/followers",
"following_url": "https://api.github.com/users/z5163449/following{/other_user}",
"gists_url": "https://api.github.com/users/z5163449/gists{/gist_id}",
"starred_url": "https://api.github.com/users/z5163449/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/z5163449/subscriptions",
"organizations_url": "https://api.github.com/users/z5163449/orgs",
"repos_url": "https://api.github.com/users/z5163449/repos",
"events_url": "https://api.github.com/users/z5163449/events{/privacy}",
"received_events_url": "https://api.github.com/users/z5163449/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nVideoMAE doesn't use a CLS token, so this can be fixed in the docstring. The number of tokens sent through the Transformer equals (number of frames // tubelet_size) * (height // patch_size) * (width // patch_size).\r\n\r\nFor video classification, the authors average pool the final hidden states of the tokens before applying a final classification head.\r\n\r\nDo you mind opening a PR to fix [this docstring](https://github.com/huggingface/transformers/blob/8fb4d0e4b46282d96386c229b9fb18bf7c80c25a/src/transformers/models/videomae/modeling_videomae.py#L901-L902)?",
"@NielsRogge Hi, sorry for coming back to this, and this may be a more general question, but why would the authors use the final hidden states of the model (that would more closely resemble the inputs again), instead of an intermediate state? I know the shapes are fixed and its not a compressing autoencoder, but why the last hidden state?",
"People typically use the last hidden states of Transformer-based models as features for classification layers. One of the first papers that did this was BERT."
] | 1,672
| 1,698
| 1,673
|
NONE
| null |
### System Info
I'm not sure if I've missed something in the code, but I can't seem to find where the CLS token is added. I have input data of shape (64, 45, 2, 32, 32) with tubelet_size = 5 and patch_size = 4. This results in a sequence length of 576, which I understand to be the total number of tubelets. I see that after the data is passed through the embedding layer, the final embedding shape is (64, 576, 768), where 768 is the hidden size. However, shouldn't the dimensions be (64, 577, 768), since we should be adding a CLS token to the sequence?
It would be great to hear back soon, because I'm not sure whether I'm wrong or whether there is something wrong with the code.
Thanks!
@NielsRogge
### Reproduction
import torch
from transformers import VideoMAEConfig, VideoMAEModel
pixel_values = torch.randn(1, 45, 2, 32, 32)
config = VideoMAEConfig()
config.num_frames = 45
config.image_size = 32
config.patch_size = 4
config.tubelet_size = 5
config.num_channels = 2
num_patches_per_frame = (config.image_size // config.patch_size) ** 2
seq_length = (config.num_frames // config.tubelet_size) * num_patches_per_frame
print(seq_length)
videomae = VideoMAEModel(config)
output = videomae(pixel_values, output_hidden_states=True)
sequence_output = output[0]
print(sequence_output.shape)
### Expected behavior
seq_length = 576
sequence_output = (1,577,768)
The embedding sequence length should be total number of tubelets + 1
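As a cross-check, the tubelet count can be computed with plain arithmetic. This matches the formula in the maintainers' reply in this thread: VideoMAE adds no CLS token, so there is no `+ 1` term. The config values below are the ones from the reproduction above.

```python
# Expected token count for VideoMAE: tubelets along time x patches per frame.
num_frames = 45
tubelet_size = 5
image_size = 32
patch_size = 4

num_patches_per_frame = (image_size // patch_size) ** 2            # 8 * 8 = 64
seq_length = (num_frames // tubelet_size) * num_patches_per_frame  # 9 * 64

print(seq_length)  # 576
```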
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21016/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21015
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21015/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21015/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21015/events
|
https://github.com/huggingface/transformers/issues/21015
| 1,520,191,220
|
I_kwDOCUB6oc5anEb0
| 21,015
|
Domain-specific word similarity from documents question-answer
|
{
"login": "VikasRathod314",
"id": 113010352,
"node_id": "U_kgDOBrxmsA",
"avatar_url": "https://avatars.githubusercontent.com/u/113010352?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VikasRathod314",
"html_url": "https://github.com/VikasRathod314",
"followers_url": "https://api.github.com/users/VikasRathod314/followers",
"following_url": "https://api.github.com/users/VikasRathod314/following{/other_user}",
"gists_url": "https://api.github.com/users/VikasRathod314/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VikasRathod314/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VikasRathod314/subscriptions",
"organizations_url": "https://api.github.com/users/VikasRathod314/orgs",
"repos_url": "https://api.github.com/users/VikasRathod314/repos",
"events_url": "https://api.github.com/users/VikasRathod314/events{/privacy}",
"received_events_url": "https://api.github.com/users/VikasRathod314/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to ask such questions as we keep issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,676
| 1,676
|
NONE
| null |
I am trying to create a chatbot-like application (inspired by ChatGPT). The bot (or application) should be able to answer questions about a software product's help documentation.
I have tried to fine-tune the distilbert-base-uncased model from Hugging Face on fewer than 100 annotated question-answer pairs in SQuAD format, but my model is not performing well: the F1 score is about 0.3. Can anyone who has worked on the same problem suggest approaches or docs for implementing document-based question answering?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21015/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21014
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21014/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21014/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21014/events
|
https://github.com/huggingface/transformers/pull/21014
| 1,519,839,666
|
PR_kwDOCUB6oc5GrAZP
| 21,014
|
Update task summary part 1
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> It is called \"what Transformers can do\", so I think the introductions of each modality should just explain what each modality is and abstain from comparing models\r\n\r\nThanks for the feedback, this helped me refine the scope of this page! It was a little difficult trying to discuss the tasks and not be tempted to also talk about the models since the two are so closely related. π
 I updated the intro of each modality with an explanation of the input data and how to get it into a usable format by the model to solve a task."
] | 1,672
| 1,673
| 1,673
|
MEMBER
| null |
This PR reworks the task summary to be more conceptual and provides more explanation about a topic to help users better understand it. It'll be focused more on understanding instead of practical steps. The update will be split into two parts:
1. Describe the tasks π€ Transformers is capable of solving (the focus of this PR). Provide some context about how these tasks used to be solved, how they're handled now, and practical applications of each task.
2. Explain how π€ Transformers solve these tasks. This'll be a more conceptually advanced and separate page.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21014/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21014",
"html_url": "https://github.com/huggingface/transformers/pull/21014",
"diff_url": "https://github.com/huggingface/transformers/pull/21014.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21014.patch",
"merged_at": 1673636514000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21013
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21013/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21013/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21013/events
|
https://github.com/huggingface/transformers/issues/21013
| 1,519,646,645
|
I_kwDOCUB6oc5ak_e1
| 21,013
|
HF models use deprecated pytorch function invocations
|
{
"login": "ngimel",
"id": 15841449,
"node_id": "MDQ6VXNlcjE1ODQxNDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/15841449?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ngimel",
"html_url": "https://github.com/ngimel",
"followers_url": "https://api.github.com/users/ngimel/followers",
"following_url": "https://api.github.com/users/ngimel/following{/other_user}",
"gists_url": "https://api.github.com/users/ngimel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ngimel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ngimel/subscriptions",
"organizations_url": "https://api.github.com/users/ngimel/orgs",
"repos_url": "https://api.github.com/users/ngimel/repos",
"events_url": "https://api.github.com/users/ngimel/events{/privacy}",
"received_events_url": "https://api.github.com/users/ngimel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
},
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks a lot for flagging @ngimel !\r\n\r\ncc @ArthurZucker and @ydshieh \r\n",
"Could use of `torch.where` be the cause of these `torch.compile()` errors in `AutoModelForSeq2SeqLM` models?\r\n\r\n```text\r\n File \"/home/kastanday/utils/mambaforge3/envs/torch2.0/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py\", line 949, in <graph break in forward>\r\n raise ValueError(f\"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds\")\r\nValueError: You have to specify either input_ids or inputs_embeds\r\n```\r\nUnsupported `ConstantVariable(str)`:\r\n```\r\n File \"/home/kastanday/utils/mambaforge3/envs/torch2.0/lib/python3.10/site-packages/torch/_dynamo/exc.py\", line 71, in unimplemented\r\n raise Unsupported(msg)\r\ntorch._dynamo.exc.Unsupported: call_function BuiltinVariable(ValueError) [ConstantVariable(str)] {}\r\n```\r\n\r\n\r\nMinimal reproduction:\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\n\r\nmodel_name = \"google/flan-t5-base\"\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(model_name)\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n\r\nmodel = torch.compile(model) # PyTorch 2.0\r\n\r\nfrom torch import _dynamo\r\n_dynamo.config.verbose = True\r\n_dynamo.explain(model)\r\n```\r\n\r\nThanks for any guidance. ",
"Should be fixed on Monday, the PR is ready π"
] | 1,672
| 1,677
| 1,677
|
NONE
| null |
Masks are frequently created in `uint8` type, see e.g. here https://github.com/huggingface/transformers/blob/8fb4d0e4b46282d96386c229b9fb18bf7c80c25a/src/transformers/models/gpt_neox_japanese/modeling_gpt_neox_japanese.py#L186 or here https://github.com/huggingface/transformers/blame/52dd2b61bff8af5b6409fdd5ec92a9b3114f3636/src/transformers/models/codegen/modeling_codegen.py#L101, and then used in `torch.where`. Use of `uint8` masks in `torch.where` has been deprecated for a couple of years, and though it still works in PyTorch eager (with a warning), support for it has been removed in `torch.compile`. It would be good to audit the places where `uint8` masks are used and replace them with bool masks.
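A minimal sketch of the suggested fix, assuming a causal-mask pattern similar to the linked code (the variable names below are illustrative, not the actual `transformers` source): build the mask as `torch.bool` rather than `torch.uint8`, so `torch.where` no longer goes through the deprecated path.

```python
import torch

seq_len = 4

# Deprecated pattern: a uint8 mask passed to torch.where (warns in eager,
# unsupported under torch.compile):
#   bias = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.uint8))

# Preferred pattern: create (or cast) the mask as bool.
bias = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

scores = torch.randn(seq_len, seq_len)
mask_value = torch.finfo(scores.dtype).min

# Keep scores on allowed (lower-triangular) positions, fill the rest.
masked = torch.where(bias, scores, torch.full_like(scores, mask_value))
print(masked.shape)  # torch.Size([4, 4])
```

An existing `uint8` buffer can also be converted instead of re-created, e.g. `bias.to(torch.bool)`.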
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21013/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21013/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21012
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21012/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21012/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21012/events
|
https://github.com/huggingface/transformers/pull/21012
| 1,519,597,707
|
PR_kwDOCUB6oc5GqKjv
| 21,012
|
Add document token classification pipeline (#1)
|
{
"login": "vaishak2future",
"id": 2349706,
"node_id": "MDQ6VXNlcjIzNDk3MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2349706?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vaishak2future",
"html_url": "https://github.com/vaishak2future",
"followers_url": "https://api.github.com/users/vaishak2future/followers",
"following_url": "https://api.github.com/users/vaishak2future/following{/other_user}",
"gists_url": "https://api.github.com/users/vaishak2future/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vaishak2future/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vaishak2future/subscriptions",
"organizations_url": "https://api.github.com/users/vaishak2future/orgs",
"repos_url": "https://api.github.com/users/vaishak2future/repos",
"events_url": "https://api.github.com/users/vaishak2future/events{/privacy}",
"received_events_url": "https://api.github.com/users/vaishak2future/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21012). All of your documentation changes will be reflected on that endpoint.",
"Hi @vaishak2future \r\n\r\nDid you know that layoutlm already implements `object-detection` : https://huggingface.co/Narsil/layoutlmv3-finetuned-funsd\r\n\r\nThis might be close enough to this, no ?",
"@Narsil , thank you for looking at the PR. While Object Detection does solve this particular instance of the problem, we see Document Token Classification as a multimodal task separate from the unimodal task of Object Detection. Document Token Classification requires two modalities - an image and a set of tokens.\r\n\r\nThis gives control to the user to use their OCR of choice (especially for languages that are not well handled by Tesseract), but also to choose their own tokens that might not be text on the image itself. ",
"@Narsil All checks are now passing. Could you please review? Thanks.",
"Hi @vaishak2future ,\r\n\r\nI understand the ideas to remove the Tesseract where needed. For the extra tokens, were you imagining extracting tokens from PDF directly maybe? (This was also an idea behind `document-question-answering` where the idea is that we could always fuse the pipeline later with regular `visual-question-answering`).\r\n\r\nHere there are a few things that make me hesitant:\r\n\r\n- Pipelines are made to be usable by non ML programmers, here, it's kind of tricky since tokens and boxes and such are quite ML involved\r\n- Pipelines are made to be relatively generic over different model types, here only layoutlm would work as-is. The idea is to keep the number of pipelines relatively small, so discoverable by users.\r\n\r\nThat being said, enabling power users like your use case should be supported IMO. I would have to look at how to implement within `object-detection`. But I don't see any issue with adding extra parameters for such niche, but extremely useful use-cases.\r\nFor instance the `asr` pipeline enables users to send the raw audio frames directly, which IMO is seemingly the same idea (bypass or modify very specifically some preprocessing, which would be the OCR in your case)\r\n\r\nWhat do you think ?\r\n\r\nPinging @sgugger @LysandreJik for other opinions on this.\r\n\r\nRegardless, I briefly looked at the PR, the code seems good, there are a few nits regarding how tests are structured and how many different inputs are accepted, but overall it looks quite good. I'll delay my comments until after we reach a decision on this as there are no big structural blockers on my end imo.",
"This looks very specific to one model. We can't host all possible pipelines in Transformers, so in such a case, we should rely on the code on the Hub for pipeline feature. You can see pointers [here](https://huggingface.co/docs/transformers/add_new_pipeline#share-your-pipeline-on-the-hub).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,676
| 1,676
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds Pipeline for Document Token Classification. Code is mostly based on PR for Document Question Answering. https://github.com/huggingface/transformers/pull/18414
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
@Narsil
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@Narsil
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21012/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21012",
"html_url": "https://github.com/huggingface/transformers/pull/21012",
"diff_url": "https://github.com/huggingface/transformers/pull/21012.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21012.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21011
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21011/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21011/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21011/events
|
https://github.com/huggingface/transformers/pull/21011
| 1,519,534,155
|
PR_kwDOCUB6oc5Gp8ru
| 21,011
|
Bump gitpython from 3.0.2 to 3.1.30 in /examples/research_projects/distillation
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21011). All of your documentation changes will be reflected on that endpoint."
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
[//]: # (dependabot-start)
β οΈ **Dependabot is rebasing this PR** β οΈ
Rebasing might not happen immediately, so don't worry if this takes some time.
Note: if you make any changes to this PR yourself, they will take precedence over the rebase.
---
[//]: # (dependabot-end)
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.0.2 to 3.1.30.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p>
<blockquote>
<h2>v3.1.30 - with important security fixes</h2>
<p>See <a href="https://github-redirect.dependabot.com/gitpython-developers/GitPython/issues/1515">gitpython-developers/GitPython#1515</a> for details.</p>
<h2>3.1.20</h2>
<p>No release notes provided.</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/141cd651e459bff8919798b3ccf03dfa167757f6"><code>141cd65</code></a> adjust changelog prior to release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/678a8fe08dd466fcfe8676294b52887955138960"><code>678a8fe</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/gitpython-developers/GitPython/issues/1521">#1521</a> from stsewd/block-insecure-options</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/ae6a6e4b088a35c0fc7b17940722c8a515f7bee7"><code>ae6a6e4</code></a> Fix type hint on create_tag</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/5bce9b4f7fc825d8bcd450325e6dda78c49f0ca0"><code>5bce9b4</code></a> Document PushInfoList</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/f4f2658d5d308b3fb9162e50cd4c7b346e7a0a47"><code>f4f2658</code></a> Updates from review</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/9dc43926207b2205d77511c6ffd40944199f0c2d"><code>9dc4392</code></a> Submodule tests</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/c8ae33b9314a7d3716827b5cb705a3cd0a2e4a46"><code>c8ae33b</code></a> More tests</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/b92f01a3a38fc8e171d08575c69de9733811faa6"><code>b92f01a</code></a> Update/add tests for Repo.clone*</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/fd2c6da5f82009398d241dc07603fbcd490ced29"><code>fd2c6da</code></a> Updates from review</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/e6108c7997f5c8f7361b982959518e982b973230"><code>e6108c7</code></a> Block unsafe options and protocols by default</li>
<li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.0.2...3.1.30">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21011/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21011",
"html_url": "https://github.com/huggingface/transformers/pull/21011",
"diff_url": "https://github.com/huggingface/transformers/pull/21011.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21011.patch",
"merged_at": 1672864602000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21010
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21010/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21010/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21010/events
|
https://github.com/huggingface/transformers/pull/21010
| 1,519,534,071
|
PR_kwDOCUB6oc5Gp8qm
| 21,010
|
Bump gitpython from 3.1.18 to 3.1.30 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21010). All of your documentation changes will be reflected on that endpoint."
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.18 to 3.1.30.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p>
<blockquote>
<h2>v3.1.30 - with important security fixes</h2>
<p>See <a href="https://github-redirect.dependabot.com/gitpython-developers/GitPython/issues/1515">gitpython-developers/GitPython#1515</a> for details.</p>
<h2>3.1.20</h2>
<p>No release notes provided.</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/141cd651e459bff8919798b3ccf03dfa167757f6"><code>141cd65</code></a> adjust changelog prior to release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/678a8fe08dd466fcfe8676294b52887955138960"><code>678a8fe</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/gitpython-developers/GitPython/issues/1521">#1521</a> from stsewd/block-insecure-options</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/ae6a6e4b088a35c0fc7b17940722c8a515f7bee7"><code>ae6a6e4</code></a> Fix type hint on create_tag</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/5bce9b4f7fc825d8bcd450325e6dda78c49f0ca0"><code>5bce9b4</code></a> Document PushInfoList</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/f4f2658d5d308b3fb9162e50cd4c7b346e7a0a47"><code>f4f2658</code></a> Updates from review</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/9dc43926207b2205d77511c6ffd40944199f0c2d"><code>9dc4392</code></a> Submodule tests</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/c8ae33b9314a7d3716827b5cb705a3cd0a2e4a46"><code>c8ae33b</code></a> More tests</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/b92f01a3a38fc8e171d08575c69de9733811faa6"><code>b92f01a</code></a> Update/add tests for Repo.clone*</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/fd2c6da5f82009398d241dc07603fbcd490ced29"><code>fd2c6da</code></a> Updates from review</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/e6108c7997f5c8f7361b982959518e982b973230"><code>e6108c7</code></a> Block unsafe options and protocols by default</li>
<li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.18...3.1.30">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21010/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21010",
"html_url": "https://github.com/huggingface/transformers/pull/21010",
"diff_url": "https://github.com/huggingface/transformers/pull/21010.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21010.patch",
"merged_at": 1672864593000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21009
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21009/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21009/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21009/events
|
https://github.com/huggingface/transformers/pull/21009
| 1,519,529,457
|
PR_kwDOCUB6oc5Gp7qe
| 21,009
|
Generate: FLAX infers pad token in its absence and has functional example
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,684
| 1,672
|
MEMBER
| null |
# What does this PR do?
Some bug fixing in advance of #21007 (PR that adds generation config to Flax), to ensure we start from a functional flax generate codebase.
In particular:
1. Flax now infers the value of `pad_token_id` when it is `None` and `eos_token_id` is not `None`, like TF and PT do. This is very helpful for open text generation examples, such as with GPT2; it was an open request (https://github.com/huggingface/transformers/issues/18884) and was one of the causes of failure in the existing example. This also includes the recent changes of #20727, where `eos_token_id` can be a list of tokens.
2. An `int32` type specification was missing for the special tokens -- when converted to JAX variables, JAX assumed they were `float32`;
3. The existing flax generate example is now part of our doctests, and runs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21009/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21009",
"html_url": "https://github.com/huggingface/transformers/pull/21009",
"diff_url": "https://github.com/huggingface/transformers/pull/21009.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21009.patch",
"merged_at": 1672919578000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21008
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21008/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21008/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21008/events
|
https://github.com/huggingface/transformers/pull/21008
| 1,519,502,653
|
PR_kwDOCUB6oc5Gp1wQ
| 21,008
|
Make sure dynamic objects can be saved and reloaded
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
COLLABORATOR
| null |
# What does this PR do?
Fixes #20884
This PR makes sure that models that use the code on the Hub feature can be saved and repushed while still including the necessary code files. As reported in #20884, this was not the case previously. The fix is simple enough, and the tests have been extended to cover this use case.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21008/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21008/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21008",
"html_url": "https://github.com/huggingface/transformers/pull/21008",
"diff_url": "https://github.com/huggingface/transformers/pull/21008.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21008.patch",
"merged_at": 1672921825000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21007
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21007/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21007/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21007/events
|
https://github.com/huggingface/transformers/pull/21007
| 1,519,491,867
|
PR_kwDOCUB6oc5GpzZZ
| 21,007
|
Generate: FLAX uses `GenerationConfig` as the basis for `.generate()` parametrization
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
MEMBER
| null |
# What does this PR do?
Changes the FLAX side of `.generate()` such that it relies on the `GenerationConfig`. This is the FLAX equivalent of https://github.com/huggingface/transformers/pull/20388
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21007/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21007",
"html_url": "https://github.com/huggingface/transformers/pull/21007",
"diff_url": "https://github.com/huggingface/transformers/pull/21007.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21007.patch",
"merged_at": 1672933297000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21006
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21006/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21006/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21006/events
|
https://github.com/huggingface/transformers/pull/21006
| 1,519,411,038
|
PR_kwDOCUB6oc5GphxS
| 21,006
|
Update PR template
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
MEMBER
| null |
Adds @MKhalusova to the PR template for documentation-related issues :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21006/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21006",
"html_url": "https://github.com/huggingface/transformers/pull/21006",
"diff_url": "https://github.com/huggingface/transformers/pull/21006.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21006.patch",
"merged_at": 1672858912000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21005
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21005/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21005/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21005/events
|
https://github.com/huggingface/transformers/pull/21005
| 1,519,403,906
|
PR_kwDOCUB6oc5GpgNE
| 21,005
|
Fix callback docstrings
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Reformatted as Markdown π"
] | 1,672
| 1,672
| 1,672
|
MEMBER
| null |
Fixes #20965 where the parameters aren't properly formatted because it uses `Environment` instead of `Args` in the docstring.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21005/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21005",
"html_url": "https://github.com/huggingface/transformers/pull/21005",
"diff_url": "https://github.com/huggingface/transformers/pull/21005.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21005.patch",
"merged_at": 1672865964000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21004
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21004/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21004/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21004/events
|
https://github.com/huggingface/transformers/pull/21004
| 1,519,383,621
|
PR_kwDOCUB6oc5GpbwA
| 21,004
|
Update bug report template
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
MEMBER
| null |
Adds @MKhalusova to the bug report template for documentation-related issues.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21004/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21004/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21004",
"html_url": "https://github.com/huggingface/transformers/pull/21004",
"diff_url": "https://github.com/huggingface/transformers/pull/21004.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21004.patch",
"merged_at": 1672857196000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21003
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21003/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21003/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21003/events
|
https://github.com/huggingface/transformers/pull/21003
| 1,519,378,266
|
PR_kwDOCUB6oc5Gpamb
| 21,003
|
Generate: Fix CI related to #20727
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,684
| 1,672
|
MEMBER
| null |
# What does this PR do?
Fixes the error that showed up here: https://github.com/huggingface/transformers/actions/runs/3834635537/jobs/6527258530
Related to #20727
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21003/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21003",
"html_url": "https://github.com/huggingface/transformers/pull/21003",
"diff_url": "https://github.com/huggingface/transformers/pull/21003.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21003.patch",
"merged_at": 1672864017000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21002
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21002/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21002/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21002/events
|
https://github.com/huggingface/transformers/pull/21002
| 1,519,341,551
|
PR_kwDOCUB6oc5GpSmT
| 21,002
|
Fix (DeepSpeed) docker image build issue
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
COLLABORATOR
| null |
# What does this PR do?
Currently, the docker image build job [Latest PyTorch + DeepSpeed](https://github.com/huggingface/transformers/actions/runs/3834393836/jobs/6526754769) fails from time to time. The issue occurs after #20788, where `apex` is recompiled during the build. It seems to be a resource issue (most likely memory) caused by the parallel build (multiple workers), so set `MAX_JOBS=1` to avoid the failure.
This increases the build time to `1h30m`, but we have to build 2 identical images (for daily CI and push CI), therefore 3h in total, which is way too long. Previously those 2 images were built sequentially due to some issue, but that issue now seems to be gone and we can build them in parallel.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21002/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21002",
"html_url": "https://github.com/huggingface/transformers/pull/21002",
"diff_url": "https://github.com/huggingface/transformers/pull/21002.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21002.patch",
"merged_at": 1672864113000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21001
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21001/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21001/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21001/events
|
https://github.com/huggingface/transformers/issues/21001
| 1,519,302,104
|
I_kwDOCUB6oc5ajrXY
| 21,001
|
Ability to Finetune ZeroShot Text Classifier on a corpus
|
{
"login": "m-ali-awan",
"id": 62832721,
"node_id": "MDQ6VXNlcjYyODMyNzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/62832721?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m-ali-awan",
"html_url": "https://github.com/m-ali-awan",
"followers_url": "https://api.github.com/users/m-ali-awan/followers",
"following_url": "https://api.github.com/users/m-ali-awan/following{/other_user}",
"gists_url": "https://api.github.com/users/m-ali-awan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m-ali-awan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m-ali-awan/subscriptions",
"organizations_url": "https://api.github.com/users/m-ali-awan/orgs",
"repos_url": "https://api.github.com/users/m-ali-awan/repos",
"events_url": "https://api.github.com/users/m-ali-awan/events{/privacy}",
"received_events_url": "https://api.github.com/users/m-ali-awan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@m-ali-awan , I believe they are already implemented in transformers, \r\n1. `from transformers import AutoModelForMaskedLM` for Masked Language Modelling(like your example) and \r\n2. `from transformers import AutoModelForCausalLM` for GPT like models. ",
"> @m-ali-awan , I believe they are already implemented in transformers,\r\n> \r\n> 1. `from transformers import AutoModelForMaskedLM` for Masked Language Modelling(like your example) and\r\n> 2. `from transformers import AutoModelForCausalLM` for GPT like models.\r\n\r\n@susnato \r\nThanks for helping me...\r\nBut, how can I format my corpus in the format of data required?\r\n",
"@m-ali-awan For MaskedLM you can just use the pre built class `DataCollatorForLanguageModeling` with mlm=True and mlm_probability=prob(how frequent do you want your tokens to be masked) as data_collator in Trainer class, then you can load your data using huggingface datasets and transformers will take care of all preprocessing in backend. I found a good and brief Kaggle Notebook, about this you can find it [here](https://www.kaggle.com/code/quincyqiang/maskedlm-pretrain-for-deberat-v3-large/notebook).",
"@susnato \r\nThanks a lot. So, now if I want to use BART-MNLI for ZeroShot or Few Shot, I can finetune it on my corpus, in the same way, as described in the mentioned Kaggle notebook. I will do this as MaskedLM, right(or should I treat it as CausalLM)?\r\nThen, using that fine-tuned model, I can do ZeroShot, and FewShot(using around 5-10 labeled examples).\r\n\r\n\r\nThanks again.",
"@m-ali-awan, Yes, but you may need to change the model based on your specific task/dataset, you can search online to see if you find a specific model which was trained for that specific type of task and for that specific type of dataset you are trying to use. You may want to go with variations of BERT for MLM at starting.",
"Ok thanks @susnato \r\nSo, now I will got for BART-Large-MNLI, and fine tune it as MLM with my custom corpus at first. Then, will try it out for Zero-Shot or Few Shot, and same I can follow for other models...\r\n\r\nThanks\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,676
| 1,676
|
NONE
| null |
### Feature request
I want to do zero-shot text classification for automotive parts, and the candidate labels number around 3200. The classification needs to be done on the basis of the part description.
Pretrained zero-shot models like BART-MNLI are not giving me good results, as they don't have much domain context. It would be great to fine-tune a zero-shot model on the full corpus of descriptions, as I think this would improve results a lot. I saw this approach in ULMFiT by fastai: they train the language-model encoder to predict the next word over the whole corpus, then use that encoder as the backbone for a text classifier, and once that is fine-tuned, results are better.
It can be seen here:
https://docs.fast.ai/tutorial.text.html
Thanks,
### Motivation
This way, we can provide the rich context that is required for classification/label tagging.
### Your contribution
I am not skilled enough to contribute to this...
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21001/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21000
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21000/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21000/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21000/events
|
https://github.com/huggingface/transformers/pull/21000
| 1,519,252,070
|
PR_kwDOCUB6oc5Go_pn
| 21,000
|
Remove more unused attributes in config classes
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,673
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21000/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21000",
"html_url": "https://github.com/huggingface/transformers/pull/21000",
"diff_url": "https://github.com/huggingface/transformers/pull/21000.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21000.patch",
"merged_at": 1673526725000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20999
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20999/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20999/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20999/events
|
https://github.com/huggingface/transformers/pull/20999
| 1,519,232,735
|
PR_kwDOCUB6oc5Go7wC
| 20,999
|
Refactor the function get_results
|
{
"login": "milyiyo",
"id": 8120990,
"node_id": "MDQ6VXNlcjgxMjA5OTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8120990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/milyiyo",
"html_url": "https://github.com/milyiyo",
"followers_url": "https://api.github.com/users/milyiyo/followers",
"following_url": "https://api.github.com/users/milyiyo/following{/other_user}",
"gists_url": "https://api.github.com/users/milyiyo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/milyiyo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/milyiyo/subscriptions",
"organizations_url": "https://api.github.com/users/milyiyo/orgs",
"repos_url": "https://api.github.com/users/milyiyo/repos",
"events_url": "https://api.github.com/users/milyiyo/events{/privacy}",
"received_events_url": "https://api.github.com/users/milyiyo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
A small refactor for the function `get_results`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20999/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20999",
"html_url": "https://github.com/huggingface/transformers/pull/20999",
"diff_url": "https://github.com/huggingface/transformers/pull/20999.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20999.patch",
"merged_at": 1672851936000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20998
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20998/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20998/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20998/events
|
https://github.com/huggingface/transformers/pull/20998
| 1,519,121,680
|
PR_kwDOCUB6oc5GojrB
| 20,998
|
Fix model hub link
|
{
"login": "idilsulo",
"id": 19615018,
"node_id": "MDQ6VXNlcjE5NjE1MDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19615018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/idilsulo",
"html_url": "https://github.com/idilsulo",
"followers_url": "https://api.github.com/users/idilsulo/followers",
"following_url": "https://api.github.com/users/idilsulo/following{/other_user}",
"gists_url": "https://api.github.com/users/idilsulo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/idilsulo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idilsulo/subscriptions",
"organizations_url": "https://api.github.com/users/idilsulo/orgs",
"repos_url": "https://api.github.com/users/idilsulo/repos",
"events_url": "https://api.github.com/users/idilsulo/events{/privacy}",
"received_events_url": "https://api.github.com/users/idilsulo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Minor link fix on main README, fixes model hub link which directs to main Hugging Face page instead of models.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@stevhliu
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20998/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20998",
"html_url": "https://github.com/huggingface/transformers/pull/20998",
"diff_url": "https://github.com/huggingface/transformers/pull/20998.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20998.patch",
"merged_at": 1672851874000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20997
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20997/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20997/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20997/events
|
https://github.com/huggingface/transformers/issues/20997
| 1,519,105,965
|
I_kwDOCUB6oc5ai7et
| 20,997
|
Query related to finetuning bert models for QA
|
{
"login": "gokul427",
"id": 20221943,
"node_id": "MDQ6VXNlcjIwMjIxOTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/20221943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gokul427",
"html_url": "https://github.com/gokul427",
"followers_url": "https://api.github.com/users/gokul427/followers",
"following_url": "https://api.github.com/users/gokul427/following{/other_user}",
"gists_url": "https://api.github.com/users/gokul427/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gokul427/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gokul427/subscriptions",
"organizations_url": "https://api.github.com/users/gokul427/orgs",
"repos_url": "https://api.github.com/users/gokul427/repos",
"events_url": "https://api.github.com/users/gokul427/events{/privacy}",
"received_events_url": "https://api.github.com/users/gokul427/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,676
| 1,676
|
NONE
| null |
When we run run_qa.py on bert-base using SQuAD data and then fine-tune it again on custom data, will it retrain all the layers, or will it train only the last layer (head) while freezing the other layers? Please explain the kind of fine-tuning done with run_qa.py.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20997/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20996
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20996/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20996/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20996/events
|
https://github.com/huggingface/transformers/issues/20996
| 1,519,099,692
|
I_kwDOCUB6oc5ai58s
| 20,996
|
Command needed to finetune bert on squad using TPU
|
{
"login": "gokul427",
"id": 20221943,
"node_id": "MDQ6VXNlcjIwMjIxOTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/20221943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gokul427",
"html_url": "https://github.com/gokul427",
"followers_url": "https://api.github.com/users/gokul427/followers",
"following_url": "https://api.github.com/users/gokul427/following{/other_user}",
"gists_url": "https://api.github.com/users/gokul427/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gokul427/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gokul427/subscriptions",
"organizations_url": "https://api.github.com/users/gokul427/orgs",
"repos_url": "https://api.github.com/users/gokul427/repos",
"events_url": "https://api.github.com/users/gokul427/events{/privacy}",
"received_events_url": "https://api.github.com/users/gokul427/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,676
| 1,676
|
NONE
| null |
Please provide the command to execute run_qa.py on TPU to finetune bert models.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20996/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20995
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20995/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20995/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20995/events
|
https://github.com/huggingface/transformers/pull/20995
| 1,518,851,371
|
PR_kwDOCUB6oc5GnoCm
| 20,995
|
[CLIPSeg] Fix integration test
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Yes, it uses `ViTImageProcessor` with different settings."
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
A user reported at https://github.com/timojl/clipseg/issues/18 that CLIPSeg uses the ImageNet mean + std instead of the [-1, 1] range for normalization. This PR updates the integration test as repos on the hub were fixed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20995/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20995/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20995",
"html_url": "https://github.com/huggingface/transformers/pull/20995",
"diff_url": "https://github.com/huggingface/transformers/pull/20995.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20995.patch",
"merged_at": 1672925433000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20994
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20994/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20994/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20994/events
|
https://github.com/huggingface/transformers/pull/20994
| 1,518,836,860
|
PR_kwDOCUB6oc5Gnky1
| 20,994
|
Generate: TF uses `GenerationConfig` as the basis for `.generate()` parametrization
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
MEMBER
| null |
# What does this PR do?
Changes the TF side of `.generate()` such that it relies on the `GenerationConfig`. This is the TF equivalent of https://github.com/huggingface/transformers/pull/20388
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20994/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20994",
"html_url": "https://github.com/huggingface/transformers/pull/20994",
"diff_url": "https://github.com/huggingface/transformers/pull/20994.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20994.patch",
"merged_at": 1672856601000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20993
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20993/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20993/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20993/events
|
https://github.com/huggingface/transformers/pull/20993
| 1,518,819,251
|
PR_kwDOCUB6oc5Gng3F
| 20,993
|
Remove cuda dependency from Deformable-DETR
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
Removes the CUDA dependency from Deformable DETR. The [OneFormer](https://github.com/huggingface/transformers/pull/20577) and [Mask2Former](https://github.com/huggingface/transformers/pull/20792) PRs also use the same multi-scale deformable attention function and eliminate the CUDA dependency.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20993/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20993",
"html_url": "https://github.com/huggingface/transformers/pull/20993",
"diff_url": "https://github.com/huggingface/transformers/pull/20993.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20993.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20992
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20992/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20992/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20992/events
|
https://github.com/huggingface/transformers/pull/20992
| 1,518,581,589
|
PR_kwDOCUB6oc5GmsVP
| 20,992
|
[CI-doc-daily] Remove RobertaPreLayernorm random tests
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,673
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
Let's wait until #20757 is merged to merge this.
The checkpoints for `RobertaPreLayerNormForMaskedLM`, `RobertaPreLayerNormForQuestionAnswering`, and `RobertaPreLayerNormForTokenClassification` were not provided, thus the expected values are random and should not be tested.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20992/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20992",
"html_url": "https://github.com/huggingface/transformers/pull/20992",
"diff_url": "https://github.com/huggingface/transformers/pull/20992.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20992.patch",
"merged_at": 1673722052000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20991
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20991/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20991/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20991/events
|
https://github.com/huggingface/transformers/issues/20991
| 1,518,264,660
|
I_kwDOCUB6oc5afuFU
| 20,991
|
Support Transformer Engine and FP8 training
|
{
"login": "zhuzilin",
"id": 10428324,
"node_id": "MDQ6VXNlcjEwNDI4MzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/10428324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhuzilin",
"html_url": "https://github.com/zhuzilin",
"followers_url": "https://api.github.com/users/zhuzilin/followers",
"following_url": "https://api.github.com/users/zhuzilin/following{/other_user}",
"gists_url": "https://api.github.com/users/zhuzilin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhuzilin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhuzilin/subscriptions",
"organizations_url": "https://api.github.com/users/zhuzilin/orgs",
"repos_url": "https://api.github.com/users/zhuzilin/repos",
"events_url": "https://api.github.com/users/zhuzilin/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhuzilin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"There is work ongoing to add support to it in [Accelerate](https://github.com/huggingface/accelerate/tree/fp8_integration) first. Once this is tested and merged, we will also port it to the `Trainer`. For now we are hit by a regression problem we are trying to fix with the team at Nvidia.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"FP8 is now supported on Ada GPUs, and it is included in Accelerate. Will work continue on including FP8 in the Trainer?\r\n@sgugger",
"The Trainer will soon use Accelerate, so this will come for free.",
"Pls update here once the trainer has supported it, thanks!",
"Looking forward to updates!",
"When will this be fully supported in transformers.TrainingArguments?"
] | 1,672
| 1,687
| 1,675
|
NONE
| null |
### Feature request
NVIDIA has proposed the FP8 tensor core along with a [Transformer Engine](https://github.com/NVIDIA/TransformerEngine) library that implements the corresponding kernels and mixed-precision strategies (i.e. the delayed scaling strategy). I wonder if there are plans to support Transformer Engine here? This could make better use of the newest hardware.
### Motivation
Make better use of the newest hardware, especially H100.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20991/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20991/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20990
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20990/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20990/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20990/events
|
https://github.com/huggingface/transformers/issues/20990
| 1,518,148,659
|
I_kwDOCUB6oc5afRwz
| 20,990
|
System out of memory because of linear usage
|
{
"login": "Yessin111",
"id": 32096448,
"node_id": "MDQ6VXNlcjMyMDk2NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/32096448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yessin111",
"html_url": "https://github.com/Yessin111",
"followers_url": "https://api.github.com/users/Yessin111/followers",
"following_url": "https://api.github.com/users/Yessin111/following{/other_user}",
"gists_url": "https://api.github.com/users/Yessin111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yessin111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yessin111/subscriptions",
"organizations_url": "https://api.github.com/users/Yessin111/orgs",
"repos_url": "https://api.github.com/users/Yessin111/repos",
"events_url": "https://api.github.com/users/Yessin111/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yessin111/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I don't know how you want us to help without a way to reproduce the error. It looks like you're not running inference within a `torch.no_grad()` context manager, which will consume memories of activations saved for the backward pass, which might be the issue.\r\n\r\nAlso cc @alaradirik ",
"Thank you for your quick reply. This is the full script:\r\n\r\n```Python\r\nfrom transformers import OwlViTProcessor, OwlViTForObjectDetection\r\nimport json\r\nimport math\r\nimport requests\r\nimport torch\r\nfrom PIL import Image\r\n\r\ndictionary = []\r\n\r\n\r\ndef sub_question_two(part_size, part):\r\n print()\r\n print(\"-------------------------------------------------------\")\r\n print(\"Processing partition \" + str(part))\r\n print(\"-------------------------------------------------------\")\r\n global dictionary\r\n dictionary = []\r\n\r\n with open(\"data.json\", \"r\") as file_json:\r\n json_data = json.load(file_json)\r\n\r\n json_data = json_data[(part - 1) * part_size:part * part_size]\r\n\r\n for row in json_data:\r\n handler(row)\r\n\r\n write_to_file(part)\r\n\r\n\r\ndef write_to_file(part):\r\n if part == 2717:\r\n open(\"data_subquestion_two.json\", \"w\")\r\n\r\n with open(\"data_subquestion_two.json\", \"r+\") as file_json:\r\n file_json.seek(0)\r\n json.dump(dictionary, file_json, indent=4)\r\n else:\r\n with open(\"data_subquestion_two.json\", \"r+\") as file_json:\r\n file_data = json.loads(file_json.read())\r\n file_data = file_data + dictionary\r\n file_json.seek(0)\r\n json.dump(file_data, file_json, indent=4)\r\n\r\n\r\ndef handler(json_row):\r\n print(json_row[\"isbn\"])\r\n try:\r\n with Image.open(requests.get(json_row[\"library\"][\"cover\"], stream=True).raw) as response:\r\n json_row[\"library\"][\"sq2\"] = get_caption(response)\r\n except:\r\n pass\r\n\r\n try:\r\n with Image.open(requests.get(json_row[\"amazon\"][\"cover\"], stream=True).raw) as response:\r\n json_row[\"amazon\"][\"sq2\"] = get_caption(response)\r\n except:\r\n pass\r\n dictionary.append(json_row)\r\n\r\n\r\ndef get_caption(response):\r\n inputs = processor(text=texts, images=response, return_tensors=\"pt\")\r\n outputs = model(**inputs)\r\n target_sizes = torch.Tensor([response.size[::-1]])\r\n results = processor.post_process(outputs=outputs, 
target_sizes=target_sizes)\r\n\r\n i = 0\r\n text = texts[i]\r\n scores, labels = results[i][\"scores\"], results[i][\"labels\"]\r\n\r\n zipped = zip(scores, labels)\r\n better_zip = {}\r\n found = []\r\n for el in zipped:\r\n if el[1] not in found:\r\n found.append(el[1])\r\n better_zip[text[el[1]]] = round(el[0].item(), 3)\r\n else:\r\n if better_zip[text[el[1]]] < round(el[0].item(), 3):\r\n better_zip[text[el[1]]] = round(el[0].item(), 3)\r\n\r\n return dict(sorted(better_zip.items(), key=lambda item: item[1], reverse=True))\r\n\r\n\r\npartition_size = 25\r\nmax_partition_sq2 = math.ceil(93662 / partition_size) + 1\r\n\r\ntexts = [['an illustration of a tortoise', 'an illustration of a magpie', 'an illustration of a sea turtle', 'an illustration of a general football', 'an illustration of a ambulance', 'an illustration of a ladder', 'an illustration of a toothbrush', 'an illustration of a syringe', 'an illustration of a sink', 'an illustration of a toy', 'an illustration of a organ', 'an illustration of a apple', 'an illustration of a eye', 'an illustration of a cosmetics', 'an illustration of a paddle', 'an illustration of a snowman', 'an illustration of a beer', 'an illustration of a chopsticks', 'an illustration of a beard', 'an illustration of a bird', 'an illustration of a traffic light', 'an illustration of a croissant', 'an illustration of a cucumber', 'an illustration of a radish', 'an illustration of a towel', 'an illustration of a doll', 'an illustration of a skull', 'an illustration of a washing machine', 'an illustration of a glove', 'an illustration of a belt', 'an illustration of a sunglasses', 'an illustration of a banjo', 'an illustration of a cart', 'an illustration of a ball', 'an illustration of a backpack', 'an illustration of a bike', 'an illustration of a home appliance', 'an illustration of a centipede', 'an illustration of a boat', 'an illustration of a surfboard', 'an illustration of a boot', 'an illustration of a headphones', 'an 
illustration of a hot dog', 'an illustration of a shorts', 'an illustration of a fast food', 'an illustration of a bus', 'an illustration of a boy', 'an illustration of a bicycle wheel', 'an illustration of a barge', 'an illustration of a laptop', 'an illustration of a miniskirt', 'an illustration of a drill', 'an illustration of a dress', 'an illustration of a bear', 'an illustration of a waffle', 'an illustration of a pancake', 'an illustration of a brown bear', 'an illustration of a woodpecker', 'an illustration of a blue jay', 'an illustration of a pretzel', 'an illustration of a bagel', 'an illustration of a tower', 'an illustration of a teapot', 'an illustration of a person', 'an illustration of a bow and arrow', 'an illustration of a swimwear', 'an illustration of a beehive', 'an illustration of a brassiere', 'an illustration of a bee', 'an illustration of a bat', 'an illustration of a starfish', 'an illustration of a popcorn', 'an illustration of a burrito', 'an illustration of a chainsaw', 'an illustration of a balloon', 'an illustration of a tent', 'an illustration of a licence plate', 'an illustration of a lantern', 'an illustration of a flashlight', 'an illustration of a billboard', 'an illustration of a tiara', 'an illustration of a limousine', 'an illustration of a necklace', 'an illustration of a carnivore', 'an illustration of a scissors', 'an illustration of a stairs', 'an illustration of a computer keyboard', 'an illustration of a printer', 'an illustration of a traffic sign', 'an illustration of a chair', 'an illustration of a shirt', 'an illustration of a poster', 'an illustration of a cheese', 'an illustration of a sock', 'an illustration of a fire hydrant', 'an illustration of a land vehicle', 'an illustration of a earrings', 'an illustration of a tie', 'an illustration of a watercraft', 'an illustration of a cabinetry', 'an illustration of a suitcase', 'an illustration of a muffin', 'an illustration of a bidet', 'an illustration of a snack', 
'an illustration of a snowmobile', 'an illustration of a clock', 'an illustration of a medical equipment', 'an illustration of a cattle', 'an illustration of a cello', 'an illustration of a jet ski', 'an illustration of a camel', 'an illustration of a coat', 'an illustration of a suit', 'an illustration of a desk', 'an illustration of a cat', 'an illustration of a bronze sculpture', 'an illustration of a juice', 'an illustration of a gondola', 'an illustration of a beetle', 'an illustration of a cannon', 'an illustration of a mouse', 'an illustration of a cookie', 'an illustration of a office', 'an illustration of a fountain', 'an illustration of a coin', 'an illustration of a calculator', 'an illustration of a cocktail', 'an illustration of a computer monitor', 'an illustration of a box', 'an illustration of a christmas tree', 'an illustration of a cowboy hat', 'an illustration of a hiking equipment', 'an illustration of a studio couch', 'an illustration of a drum', 'an illustration of a dessert', 'an illustration of a wine rack', 'an illustration of a drink', 'an illustration of a zucchini', 'an illustration of a ladle', 'an illustration of a mouth', 'an illustration of a dairy', 'an illustration of a dice', 'an illustration of a oven', 'an illustration of a dinosaur', 'an illustration of a couch', 'an illustration of a cricket ball', 'an illustration of a winter melon', 'an illustration of a whiteboard', 'an illustration of a door', 'an illustration of a hat', 'an illustration of a shower', 'an illustration of a fedora', 'an illustration of a guacamole', 'an illustration of a dagger', 'an illustration of a scarf', 'an illustration of a dolphin', 'an illustration of a sombrero', 'an illustration of a tin can', 'an illustration of a mug', 'an illustration of a tap', 'an illustration of a harbor seal', 'an illustration of a stretcher', 'an illustration of a goggles', 'an illustration of a human body', 'an illustration of a roller skates', 'an illustration of a 
coffee cup', 'an illustration of a cutting board', 'an illustration of a blender', 'an illustration of a plumbing fixture', 'an illustration of a stop sign', 'an illustration of a office supplies', 'an illustration of a volleyball', 'an illustration of a vase', 'an illustration of a slow cooker', 'an illustration of a wardrobe', 'an illustration of a coffee', 'an illustration of a paper towel', 'an illustration of a personal care', 'an illustration of a food', 'an illustration of a sun hat', 'an illustration of a tree house', 'an illustration of a skirt', 'an illustration of a gas stove', 'an illustration of a salt and pepper shakers', 'an illustration of a mechanical fan', 'an illustration of a fruit', 'an illustration of a french fries', 'an illustration of a nightstand', 'an illustration of a barrel', 'an illustration of a kite', 'an illustration of a tart', 'an illustration of a treadmill', 'an illustration of a fox', 'an illustration of a flag', 'an illustration of a horn', 'an illustration of a window blind', 'an illustration of a foot', 'an illustration of a golf cart', 'an illustration of a jacket', 'an illustration of a egg', 'an illustration of a street light', 'an illustration of a guitar', 'an illustration of a pillow', 'an illustration of a leg', 'an illustration of a isopod', 'an illustration of a grape', 'an illustration of a ear', 'an illustration of a power plugs and sockets', 'an illustration of a panda', 'an illustration of a giraffe', 'an illustration of a woman', 'an illustration of a door handle', 'an illustration of a rhinoceros', 'an illustration of a bathtub', 'an illustration of a goldfish', 'an illustration of a houseplant', 'an illustration of a goat', 'an illustration of a baseball bat', 'an illustration of a baseball glove', 'an illustration of a mixing bowl', 'an illustration of a marine invertebrates', 'an illustration of a kitchen utensil', 'an illustration of a light switch', 'an illustration of a house', 'an illustration of a 
horse', 'an illustration of a stationary bicycle', 'an illustration of a ceiling fan', 'an illustration of a sofa bed', 'an illustration of a harp', 'an illustration of a sandal', 'an illustration of a bicycle helmet', 'an illustration of a saucer', 'an illustration of a harpsichord', 'an illustration of a hair', 'an illustration of a hamster', 'an illustration of a curtain', 'an illustration of a bed', 'an illustration of a kettle', 'an illustration of a fireplace', 'an illustration of a scale', 'an illustration of a drinking straw', 'an illustration of a insect', 'an illustration of a invertebrate', 'an illustration of a food processor', 'an illustration of a bookcase', 'an illustration of a refrigerator', 'an illustration of a wood-burning stove', 'an illustration of a punching bag', 'an illustration of a common fig', 'an illustration of a jaguar', 'an illustration of a golf ball', 'an illustration of a fashion accessory', 'an illustration of a alarm clock', 'an illustration of a filing cabinet', 'an illustration of a artichoke', 'an illustration of a table', 'an illustration of a tableware', 'an illustration of a kangaroo', 'an illustration of a koala', 'an illustration of a knife', 'an illustration of a bottle', 'an illustration of a lynx', 'an illustration of a lavender', 'an illustration of a lighthouse', 'an illustration of a dumbbell', 'an illustration of a head', 'an illustration of a bowl', 'an illustration of a porch', 'an illustration of a lizard', 'an illustration of a billiard table', 'an illustration of a mammal', 'an illustration of a mouse', 'an illustration of a motorcycle', 'an illustration of a musical instrument', 'an illustration of a swim cap', 'an illustration of a frying pan', 'an illustration of a snowplow', 'an illustration of a bathroom cabinet', 'an illustration of a missile', 'an illustration of a bust', 'an illustration of a man', 'an illustration of a milk', 'an illustration of a plate', 'an illustration of a mobile phone', 'an 
illustration of a baked goods', 'an illustration of a mushroom', 'an illustration of a pitcher', 'an illustration of a mirror', 'an illustration of a lifejacket', 'an illustration of a table tennis racket', 'an illustration of a musical keyboard', 'an illustration of a scoreboard', 'an illustration of a briefcase', 'an illustration of a kitchen knife', 'an illustration of a tennis ball', 'an illustration of a plastic bag', 'an illustration of a oboe', 'an illustration of a chest of drawers', 'an illustration of a ostrich', 'an illustration of a piano', 'an illustration of a girl', 'an illustration of a plant', 'an illustration of a potato', 'an illustration of a sports equipment', 'an illustration of a pasta', 'an illustration of a penguin', 'an illustration of a pumpkin', 'an illustration of a pear', 'an illustration of a infant bed', 'an illustration of a polar bear', 'an illustration of a mixer', 'an illustration of a cupboard', 'an illustration of a jacuzzi', 'an illustration of a pizza', 'an illustration of a digital clock', 'an illustration of a pig', 'an illustration of a reptile', 'an illustration of a rifle', 'an illustration of a lipstick', 'an illustration of a skateboard', 'an illustration of a raven', 'an illustration of a high heels', 'an illustration of a red panda', 'an illustration of a rose', 'an illustration of a rabbit', 'an illustration of a sculpture', 'an illustration of a saxophone', 'an illustration of a shotgun', 'an illustration of a seafood', 'an illustration of a submarine sandwich', 'an illustration of a snowboard', 'an illustration of a sword', 'an illustration of a picture frame', 'an illustration of a sushi', 'an illustration of a loveseat', 'an illustration of a ski', 'an illustration of a squirrel', 'an illustration of a tripod', 'an illustration of a stethoscope', 'an illustration of a submarine', 'an illustration of a scorpion', 'an illustration of a segway', 'an illustration of a bench', 'an illustration of a snake', 'an 
illustration of a coffee table', 'an illustration of a skyscraper', 'an illustration of a sheep', 'an illustration of a television', 'an illustration of a trombone', 'an illustration of a tea', 'an illustration of a tank', 'an illustration of a taco', 'an illustration of a telephone', 'an illustration of a tiger', 'an illustration of a strawberry', 'an illustration of a trumpet', 'an illustration of a tree', 'an illustration of a tomato', 'an illustration of a train', 'an illustration of a tool', 'an illustration of a picnic basket', 'an illustration of a trousers', 'an illustration of a bowling equipment', 'an illustration of a football helmet', 'an illustration of a truck', 'an illustration of a coffeemaker', 'an illustration of a violin', 'an illustration of a vehicle', 'an illustration of a handbag', 'an illustration of a wine', 'an illustration of a weapon', 'an illustration of a wheel', 'an illustration of a worm', 'an illustration of a wok', 'an illustration of a whale', 'an illustration of a zebra', 'an illustration of a auto part', 'an illustration of a jug', 'an illustration of a cream', 'an illustration of a monkey', 'an illustration of a lion', 'an illustration of a bread', 'an illustration of a platter', 'an illustration of a chicken', 'an illustration of a eagle', 'an illustration of a helicopter', 'an illustration of a owl', 'an illustration of a duck', 'an illustration of a turtle', 'an illustration of a hippopotamus', 'an illustration of a crocodile', 'an illustration of a toilet', 'an illustration of a toilet paper', 'an illustration of a squid', 'an illustration of a clothing', 'an illustration of a footwear', 'an illustration of a lemon', 'an illustration of a spider', 'an illustration of a deer', 'an illustration of a frog', 'an illustration of a banana', 'an illustration of a rocket', 'an illustration of a wine glass', 'an illustration of a countertop', 'an illustration of a tablet computer', 'an illustration of a waste container', 'an 
illustration of a swimming pool', 'an illustration of a dog', 'an illustration of a book', 'an illustration of a elephant', 'an illustration of a shark', 'an illustration of a candle', 'an illustration of a leopard', 'an illustration of a porcupine', 'an illustration of a flower', 'an illustration of a canary', 'an illustration of a cheetah', 'an illustration of a palm tree', 'an illustration of a hamburger', 'an illustration of a maple', 'an illustration of a building', 'an illustration of a fish', 'an illustration of a lobster', 'an illustration of a asparagus', 'an illustration of a furniture', 'an illustration of a hedgehog', 'an illustration of a airplane', 'an illustration of a spoon', 'an illustration of a otter', 'an illustration of a bull', 'an illustration of a oyster', 'an illustration of a convenience store', 'an illustration of a bench', 'an illustration of a ice cream', 'an illustration of a caterpillar', 'an illustration of a butterfly', 'an illustration of a parachute', 'an illustration of a orange', 'an illustration of a antelope', 'an illustration of a moths and butterflies', 'an illustration of a window', 'an illustration of a closet', 'an illustration of a castle', 'an illustration of a jellyfish', 'an illustration of a goose', 'an illustration of a mule', 'an illustration of a swan', 'an illustration of a peach', 'an illustration of a seat belt', 'an illustration of a raccoon', 'an illustration of a fork', 'an illustration of a lamp', 'an illustration of a camera', 'an illustration of a squash', 'an illustration of a racket', 'an illustration of a face', 'an illustration of a arm', 'an illustration of a vegetable', 'an illustration of a unicycle', 'an illustration of a falcon', 'an illustration of a snail', 'an illustration of a shellfish', 'an illustration of a cabbage', 'an illustration of a carrot', 'an illustration of a mango', 'an illustration of a jeans', 'an illustration of a flowerpot', 'an illustration of a pineapple', 'an illustration 
of a drawer', 'an illustration of a stool', 'an illustration of a envelope', 'an illustration of a cake', 'an illustration of a dragonfly', 'an illustration of a sunflower', 'an illustration of a microwave oven', 'an illustration of a honeycomb', 'an illustration of a marine mammal', 'an illustration of a sea lion', 'an illustration of a ladybug', 'an illustration of a shelf', 'an illustration of a watch', 'an illustration of a candy', 'an illustration of a salad', 'an illustration of a parrot', 'an illustration of a handgun', 'an illustration of a sparrow', 'an illustration of a van', 'an illustration of a spice rack', 'an illustration of a light bulb', 'an illustration of a corded phone', 'an illustration of a sports uniform', 'an illustration of a tennis racket', 'an illustration of a wall clock', 'an illustration of a serving tray', 'an illustration of a kitchen & dining room table', 'an illustration of a dog bed', 'an illustration of a cake stand', 'an illustration of a bathroom accessory', 'an illustration of a kitchen appliance', 'an illustration of a tire', 'an illustration of a ruler', 'an illustration of a luggage and bags', 'an illustration of a microphone', 'an illustration of a broccoli', 'an illustration of a umbrella', 'an illustration of a pastry', 'an illustration of a grapefruit', 'an illustration of a animal', 'an illustration of a bell pepper', 'an illustration of a turkey', 'an illustration of a lily', 'an illustration of a pomegranate', 'an illustration of a doughnut', 'an illustration of a glasses', 'an illustration of a nose', 'an illustration of a pen', 'an illustration of a ant', 'an illustration of a car', 'an illustration of a aircraft', 'an illustration of a hand', 'an illustration of a teddy bear', 'an illustration of a watermelon', 'an illustration of a cantaloupe', 'an illustration of a dishwasher', 'an illustration of a flute', 'an illustration of a balance beam', 'an illustration of a sandwich', 'an illustration of a shrimp', 'an 
illustration of a sewing machine', 'an illustration of a binoculars', 'an illustration of a rays and skates', 'an illustration of a ipod', 'an illustration of a accordion', 'an illustration of a willow', 'an illustration of a crab', 'an illustration of a crown', 'an illustration of a seahorse', 'an illustration of a perfume', 'an illustration of a alpaca', 'an illustration of a taxi', 'an illustration of a canoe', 'an illustration of a remote control', 'an illustration of a wheelchair', 'an illustration of a rugby ball', 'an illustration of a helmet']]\r\n\r\nprocessor = OwlViTProcessor.from_pretrained(\"google/owlvit-base-patch32\")\r\nmodel = OwlViTForObjectDetection.from_pretrained(\"google/owlvit-base-patch32\")\r\n\r\nfor partition in range(2717, max_partition_sq2):\r\n sub_question_two(partition_size, partition)\r\n```\r\n\r\ndata.json contains the urls to the images. I will look into torch.no_grad().",
"Quick update. It looks like the ```torch.no_grad()``` function did the trick! Adding it after the ```get_caption(response)``` call made it so the memory usage remains stable at around 12G memory usage.\r\nI can't thank you enough, I've been up trying to fix this since I created this issue and will now sleep."
] | 1,672
| 1,672
| 1,672
|
NONE
| null |
### System Info
Hi, hope this is the correct way to address my issue.
When running a model to identify objects in images, memory usage keeps rising until my system can't handle any more, which usually happens after roughly 15 images. I think I narrowed it down to the
```python
outputs = model(**inputs)
```
line, as removing it for testing purposes gets rid of the memory increase.
I'll paste relevant parts of my code in the _Reproduction_ box.
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
texts = [['an illustration of ...']] # texts has about 500 prompts it looks for
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
for partition in range(1, max_partition): # my code detects 25 images at a time, then writes the results to file
detect(25, partition)
# it then gets the urls from a json file and gets the images as a response, after which it does the following:
def get(response):
inputs = processor(text=texts, images=response, return_tensors="pt")
outputs = model(**inputs)
target_sizes = torch.Tensor([response.size[::-1]])
results = processor.post_process(outputs=outputs, target_sizes=target_sizes)
i = 0
text = texts[i]
scores, labels = results[i]["scores"], results[i]["labels"]
return zip(scores, labels)
```
I hope this explains it well. If need be I can also supply the actual Python file.
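As discussed in the comments on this issue, wrapping the forward pass in `torch.no_grad()` resolved the leak: without it, each `model(**inputs)` output keeps an autograd graph alive, so memory accumulates across images. A minimal sketch of the difference, using a toy `nn.Linear` in place of the OwlViT model:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 10)
x = torch.randn(1, 10)

# Default: the output carries an autograd graph that retains activations.
y = model(x)
print(y.requires_grad)  # True

# Inference-only: no graph is built, so nothing accumulates between images.
with torch.no_grad():
    y = model(x)
print(y.requires_grad)  # False
```

In the reported script, the same idea applies by wrapping the `outputs = model(**inputs)` call in a `with torch.no_grad():` block.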
### Expected behavior
Not crashing my system after 2 minutes of runtime.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20990/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20989
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20989/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20989/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20989/events
|
https://github.com/huggingface/transformers/pull/20989
| 1,517,736,314
|
PR_kwDOCUB6oc5Gjxwj
| 20,989
|
Fix race condition on cleaning checkpoints when save_total_limit set to 1
|
{
"login": "radcheb",
"id": 5963615,
"node_id": "MDQ6VXNlcjU5NjM2MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5963615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/radcheb",
"html_url": "https://github.com/radcheb",
"followers_url": "https://api.github.com/users/radcheb/followers",
"following_url": "https://api.github.com/users/radcheb/following{/other_user}",
"gists_url": "https://api.github.com/users/radcheb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/radcheb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/radcheb/subscriptions",
"organizations_url": "https://api.github.com/users/radcheb/orgs",
"repos_url": "https://api.github.com/users/radcheb/repos",
"events_url": "https://api.github.com/users/radcheb/events{/privacy}",
"received_events_url": "https://api.github.com/users/radcheb/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"It's you to thank @sgugger for you quick review. I've just fixed the style and pushed."
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes #20988 by testing whether the worker process is allowed to save (`self.args.should_save` is set to True).
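The guard can be sketched as follows. This is an illustrative simplification, not the exact `Trainer` internals: the class and function names are stand-ins, and only the decision logic matters.

```python
import os
import shutil
import tempfile

class Args:
    # In a multi-node run, should_save is True only on the process that is
    # responsible for writing checkpoints (names here are illustrative).
    def __init__(self, should_save, save_total_limit=1):
        self.should_save = should_save
        self.save_total_limit = save_total_limit

def rotate_checkpoints(args, checkpoints):
    # Only the saving process deletes old checkpoints; the other workers
    # return early, so they never race on the same shared-filesystem paths.
    if not args.should_save:
        return
    while len(checkpoints) > args.save_total_limit:
        shutil.rmtree(checkpoints.pop(0), ignore_errors=True)

# Simulate two workers sharing one checkpoint directory.
base = tempfile.mkdtemp()
ckpts = [os.path.join(base, f"checkpoint-{i}") for i in (3, 6, 9)]
for c in ckpts:
    os.makedirs(c)

rotate_checkpoints(Args(should_save=False), list(ckpts))  # worker: no-op
rotate_checkpoints(Args(should_save=True), list(ckpts))   # main: prunes old ones
print(sorted(os.listdir(base)))  # ['checkpoint-9']
```

With the guard in place, only one process ever calls `shutil.rmtree` on a given checkpoint, so the `FileNotFoundError` race from #20988 cannot occur.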
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #20988
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- trainer: @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20989/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20989",
"html_url": "https://github.com/huggingface/transformers/pull/20989",
"diff_url": "https://github.com/huggingface/transformers/pull/20989.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20989.patch",
"merged_at": 1672776972000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20988
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20988/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20988/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20988/events
|
https://github.com/huggingface/transformers/issues/20988
| 1,517,735,455
|
I_kwDOCUB6oc5ads4f
| 20,988
|
[Multi-node setup] Race condition on deleting checkpoint when using shared filesystem and save_total_limit=1
|
{
"login": "radcheb",
"id": 5963615,
"node_id": "MDQ6VXNlcjU5NjM2MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5963615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/radcheb",
"html_url": "https://github.com/radcheb",
"followers_url": "https://api.github.com/users/radcheb/followers",
"following_url": "https://api.github.com/users/radcheb/following{/other_user}",
"gists_url": "https://api.github.com/users/radcheb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/radcheb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/radcheb/subscriptions",
"organizations_url": "https://api.github.com/users/radcheb/orgs",
"repos_url": "https://api.github.com/users/radcheb/repos",
"events_url": "https://api.github.com/users/radcheb/events{/privacy}",
"received_events_url": "https://api.github.com/users/radcheb/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
### System Info
When running training on a multi-node setup with a shared filesystem (a shared PVC on Kubernetes), we use the following configuration (full example in the Reproduction section):
```python
load_best_model_at_end=True,
save_on_each_node=False,
save_total_limit=1,
```
When the training finishes over all epochs, it fails with a FileNotFoundError on a random file. It seems all the workers try to delete the same files when `save_total_limit=1` is set, which causes the whole training script to fail:
```bash
FileNotFoundError: [Errno 2] No such file or directory: 'rng_state_1.pth'
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 7796)
...
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
```
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I created the following python script `trainer_bug.py`, it runs **GLUE** `cola` training task on a small sample of data:
```python
# pip install transformers==4.25.1 datasets==2.8.0 torch==1.13.1 scipy scikit-learn
import numpy as np
from datasets import load_dataset, load_metric
from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer
task = "cola"
model_checkpoint = "distilbert-base-uncased"
num_labels = 2
batch_size = 2
metric_name = "matthews_correlation"
validation_key = "validation"
SAMPLE_N_ROWS = 10
if __name__ == "__main__":
dataset = load_dataset("glue", task)
for split in dataset:
dataset[split] = dataset[split].select(range(SAMPLE_N_ROWS))
metric = load_metric('glue', task)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)
def preprocess_function(examples):
return tokenizer(examples["sentence"], truncation=True)
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = np.argmax(predictions, axis=1)
return metric.compute(predictions=predictions, references=labels)
encoded_dataset = dataset.map(preprocess_function, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels)
model_name = model_checkpoint.split("/")[-1]
args = TrainingArguments(
f"{model_name}-finetuned-{task}",
evaluation_strategy="epoch",
save_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=3,
weight_decay=0.01,
report_to="none",
metric_for_best_model=metric_name,
overwrite_output_dir=True,
load_best_model_at_end=True,
log_on_each_node=False,
save_on_each_node=False,
save_total_limit=1,
# For a distributed CPU setup
no_cuda=True,
xpu_backend="gloo",
)
trainer = Trainer(
model,
args,
train_dataset=encoded_dataset["train"],
eval_dataset=encoded_dataset[validation_key],
tokenizer=tokenizer,
compute_metrics=compute_metrics
)
trainer.train()
```
And then run it with this script `trainer_bug.sh` to simulate 2 nodes setup on CPUs:
```bash
WORLD_SIZE=2
PROC_PER_NODE=1
MASTER_HOSTNAME=localhost
MASTER_PORT=12345
# Run worker
RANK=1
CUDA_VISIBLE_DEVICES="" torchrun --nnodes=$WORLD_SIZE --nproc_per_node=$PROC_PER_NODE \
--node_rank=$RANK --master_addr=$MASTER_HOSTNAME \
--master_port=$MASTER_PORT \
trainer_bug.py &
# Run master
RANK=0
CUDA_VISIBLE_DEVICES="" torchrun --nnodes=$WORLD_SIZE --nproc_per_node=$PROC_PER_NODE \
--node_rank=$RANK --master_addr=$MASTER_HOSTNAME \
--master_port=$MASTER_PORT \
trainer_bug.py
```
### Expected behavior
The training is expected to finish successfully.
However it fails with the following stack trace:
```bash
Loading best model from distilbert-base-uncased-finetuned-cola/checkpoint-3 (score: 0.0).
{'train_runtime': 24.6088, 'train_samples_per_second': 1.219, 'train_steps_per_second': 0.366, 'train_loss': 0.5689484278361002, 'epoch': 3.0}{'train_runtime': 24.6164, 'train_samples_per_second': 1.219, 'train_steps_per_second': 0.366, 'train_loss': 0.5813997056749132, 'epoch': 3.0}
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 9/9 [00:24<00:00, 1.83s/it]
Deleting older checkpoint [distilbert-base-uncased-finetuned-cola/checkpoint-9] due to args.save_total_limit
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 9/9 [00:24<00:00, 2.74s/it]
Traceback (most recent call last):
File "trainer_bug.py", line 66, in <module>
trainer.train()
File "/home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/lib/python3.8/site-packages/transformers/trainer.py", line 1527, in train
return inner_training_loop(
File "/home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/lib/python3.8/site-packages/transformers/trainer.py", line 1920, in _inner_training_loop
shutil.rmtree(checkpoint)
File "/home/XXX/.pyenv/versions/3.8.13/lib/python3.8/shutil.py", line 718, in rmtree
_rmtree_safe_fd(fd, path, onerror)
File "/home/XXX/.pyenv/versions/3.8.13/lib/python3.8/shutil.py", line 675, in _rmtree_safe_fd
onerror(os.unlink, fullname, sys.exc_info())
File "/home/XXX/.pyenv/versions/3.8.13/lib/python3.8/shutil.py", line 673, in _rmtree_safe_fd
os.unlink(entry.name, dir_fd=topfd)
FileNotFoundError: [Errno 2] No such file or directory: 'rng_state_1.pth'
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 7796) of binary: /home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/bin/python
Traceback (most recent call last):
File "/home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/lib/python3.8/site-packages/torch/distributed/run.py", line 762, in main
run(args)
File "/home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/lib/python3.8/site-packages/torch/distributed/run.py", line 753, in run
elastic_launch(
File "/home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
trainer_bug.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-01-03_18:28:49
host : XXXXXX
rank : 1 (local_rank: 0)
exitcode : 1 (pid: 7796)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20988/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20987
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20987/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20987/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20987/events
|
https://github.com/huggingface/transformers/issues/20987
| 1,517,589,772
|
I_kwDOCUB6oc5adJUM
| 20,987
|
Hugging Face Dies Silently when Memory insufficient for loading Model / Training Model
|
{
"login": "courtneysprouse",
"id": 25102613,
"node_id": "MDQ6VXNlcjI1MTAyNjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/25102613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/courtneysprouse",
"html_url": "https://github.com/courtneysprouse",
"followers_url": "https://api.github.com/users/courtneysprouse/followers",
"following_url": "https://api.github.com/users/courtneysprouse/following{/other_user}",
"gists_url": "https://api.github.com/users/courtneysprouse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/courtneysprouse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/courtneysprouse/subscriptions",
"organizations_url": "https://api.github.com/users/courtneysprouse/orgs",
"repos_url": "https://api.github.com/users/courtneysprouse/repos",
"events_url": "https://api.github.com/users/courtneysprouse/events{/privacy}",
"received_events_url": "https://api.github.com/users/courtneysprouse/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"If you have insufficient GPU memory, you will get the PyTorch error. For RAM issues, I don't think there is anything that exists to issue the same errors.",
"I was running on CPU. I know I've gotten the pytorch errors on GPU. If nothing exists that's alright. Just thought it would be nice to get an error message so you could more easily see what was going on, particularly when you're just loading a model for inferencing, which is often done on cpu. "
] | 1,672
| 1,672
| 1,672
|
NONE
| null |
Currently, when you load a model that is too large for memory, or try to train a model with insufficient memory, the process gets killed without an error message. It's a bit tough to track down what is going on as a result. I'm wondering if you could add an error message, similar to PyTorch's, when there is insufficient memory to run a given process?
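Until such a check exists in the library, a user-side guard can at least fail loudly before the OS OOM-killer strikes. A rough POSIX-only sketch (the function names and the size estimate are illustrative, not a transformers API):

```python
import os

def available_ram_bytes():
    # Rough available-RAM estimate on POSIX systems; an illustrative
    # user-side workaround, not part of transformers.
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_AVPHYS_PAGES")

def assert_fits_in_ram(estimated_model_bytes):
    """Fail loudly before loading instead of letting the OS kill the process."""
    avail = available_ram_bytes()
    if estimated_model_bytes > avail:
        raise MemoryError(
            f"model needs ~{estimated_model_bytes} bytes, "
            f"only {avail} bytes of RAM available"
        )

# An absurd size triggers a readable error instead of a silent SIGKILL.
try:
    assert_fits_in_ram(10**18)
    caught = False
except MemoryError:
    caught = True
```

One could call `assert_fits_in_ram(num_params * 4)` (4 bytes per fp32 parameter) before `from_pretrained` as a coarse pre-flight check.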
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20987/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20986
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20986/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20986/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20986/events
|
https://github.com/huggingface/transformers/pull/20986
| 1,517,587,084
|
PR_kwDOCUB6oc5GjRh8
| 20,986
|
Fix for LXMERT
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
COLLABORATOR
| null |
# What does this PR do?
While continuing to remove unused attributes in config classes, it turned out that `LxmertConfig.visual_feat_loss` is not used, which appears to be a mistake; this PR fixes that.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20986/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20986",
"html_url": "https://github.com/huggingface/transformers/pull/20986",
"diff_url": "https://github.com/huggingface/transformers/pull/20986.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20986.patch",
"merged_at": 1672762612000
}
|