Dataset columns (name: type, value range; ⌀ = nullable):
- url: string (length 62–66)
- repository_url: string (1 class, 1 value)
- labels_url: string (length 76–80)
- comments_url: string (length 71–75)
- events_url: string (length 69–73)
- html_url: string (length 50–56)
- id: int64 (377M–2.15B)
- node_id: string (length 18–32)
- number: int64 (1–29.2k)
- title: string (length 1–487)
- user: dict
- labels: list
- state: string (2 classes)
- locked: bool (2 classes)
- assignee: dict
- assignees: list
- comments: list
- created_at: int64 (1.54k–1.71k)
- updated_at: int64 (1.54k–1.71k)
- closed_at: int64 (1.54k–1.71k, ⌀)
- author_association: string (4 classes)
- active_lock_reason: string (2 classes)
- body: string (length 0–234k, ⌀)
- reactions: dict
- timeline_url: string (length 71–75)
- state_reason: string (3 classes)
- draft: bool (2 classes)
- pull_request: dict
https://api.github.com/repos/huggingface/transformers/issues/17271
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17271/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17271/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17271/events
|
https://github.com/huggingface/transformers/pull/17271
| 1,236,674,965
|
PR_kwDOCUB6oc432q09
| 17,271
|
Add TFData2VecVision for semantic segmentation
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks a lot for your PR! Note that on the pyramid pooling class, even if we change the PyTorch class to not subclass `ModuleList` anymore, it will still need to keep the same weight names, otherwise compatibility with any checkpoint on the Hub will be broken.\r\n\r\nAbsolutely. ",
"@Rocketknight1 a gentle ping :)",
"Ah, I'm sorry! Will review it by tomorrow.",
"Hi, I just took a look over this! I suspect the issue with the tests is that there's something like a layer name collision when saving. In h5 files, weights are saved as 'datasets' , so this error is telling us that the weights are not uniquely named - the same 'dataset' name is being written to twice during saving, which means two layers share the same name.",
"Yes, I suspected something similar but couldn't figure out where the duplicate is coming from. Do you have any suggestions?\r\n\r\n@Rocketknight1 ",
"I suspect the issue is most likely related to the implementation of AdaptiveAvgPool I wrote - the practice of precomputing a constant sparse matrix like that is non-standard, and TF might be trying to save that Tensor somehow. Can you try replacing it with a 'dummy' layer that has the same output shape and seeing if the error goes away? If so, I can work on a different implementation for the layer - I have some ideas that I think will improve performance a lot, and they might also resolve the problem too.",
"> Can you try replacing it with a 'dummy' layer that has the same output shape and seeing if the error goes away? \r\n\r\nSure. I will do it and get back. ",
"@Rocketknight1 this is what I did:\r\n\r\nhttps://github.com/sayakpaul/transformers/blob/f9292cf2c47baf7eb264c98c6189ae503930130f/src/transformers/models/data2vec/modeling_tf_data2vec_vision.py#L1102\r\n\r\nSame issue. ",
"@sayakpaul I used post-mortem debugging to isolate this - just add this to `TFData2VecVisionModelTest`:\r\n\r\n```\r\n def test_save_load(self):\r\n try:\r\n super().test_save_load()\r\n except:\r\n import pdb\r\n pdb.post_mortem()\r\n```\r\n\r\nThen run the tests with `pytest --capture=no`. This will break into a debugger at the point of failure, and you can step up to the calling frame with `(u)p`.\r\n\r\nFrom there, I can tell that the offending array has name `kernel:0` with shape `(1, 1, 32, 32)`, though I couldn't figure out exactly where it was. Is there a 1x1 conv2D in your code that maps 32 filters to 32 filters?",
"> From there, I can tell that the offending array has name kernel:0 with shape (1, 1, 32, 32), though I couldn't figure out exactly where it was. Is there a 1x1 conv2D in your code that maps 32 filters to 32 filters?\r\n\r\nThere are multiple 1x1 convs, yes. ",
"> Then run the tests with pytest --capture=no. This will break into a debugger at the point of failure, and you can step up to the calling frame with (u)p.\r\n\r\nCould you elaborate a bit more here? I have added the `pdb` snippet into the model tester code. Then I ran `RUN_SLOW=1 python -m pytest --capture=no tests/models/data2vec/test_modeling_tf_data2vec_vision.py`. I do get the pdb prompt and I get to `-> super().test_save_load()` as the oldest frame. \r\n\r\n@Rocketknight1 ",
"@sayakpaul I stepped up to the frame of `dsid = dataset.make_new_dset(group, shape, dtype, data, name, **kwds)`. This let me inspect the variable `name` and the `group`, but I didn't understand `h5py` well enough to figure out the exact weight causing the issue.",
"@Rocketknight1 I looked into the layers with `kernel_size=1` and tried to fix their names to use something that's suffixed with identifiers. You can find the commit [here](https://github.com/sayakpaul/transformers/commit/8ccf88bf6bcc054307faf58e9ca2b21e04c6e60b).\r\n\r\nIt still didn't resolve the issue. The only potential suspect I could find is the following. There are two layers namely `classifier` in `TFData2VecVisionForSemanticSegmentation` that are added via `TFData2VecVisionUperHead` and `TFData2VecVisionFCNHead` respectively. \r\n\r\nThoughts? ",
"Update:\r\n\r\nWith @Rocketknight1's help, I was able to resolve the current test failure (commit [here](https://github.com/sayakpaul/transformers/blob/fix/tf-data2vec-seg/src/transformers/models/data2vec/modeling_tf_data2vec_vision.py)). But I have run into two more failures which I am currently discussing with @Rocketknight1. He's on vacation. Once he gets back, hopefully, will be able to report back with updates. "
] | 1,652
| 1,654
| 1,654
|
MEMBER
| null |
This PR introduces `TFData2VecVisionForSemanticSegmentation` which takes the `TFData2VecVisionMainLayer` and appends the necessary layers for performing semantic segmentation along with loss computation (first one in this line?).
**Notes**
* Thanks to @Rocketknight1 who implemented the adaptive average pooling layer.
* Currently, the model-saving tests (2 tests) fail as soon as the `TFData2VecVisionForSemanticSegmentation` class is introduced to `tests/models/test_modeling_tf_data2vec_vision.py`. Without that class, the tests run as expected. I would appreciate any help.
* As discussed over Slack, [this class](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_vision.py#L882) should never have been subclassed from `nn.ModuleList`. It is currently leading to a few idiosyncrasies on the TF side (mainly related to the naming of the layers). Once that is sorted out, we can revisit this `TFData2VecVisionForSemanticSegmentation` class and make amendments if needed. Happy to take charge of that then.
* I ran the tests locally with the following command: `RUN_SLOW=1 python -m pytest tests/models/data2vec/test_modeling_tf_data2vec_vision.py`.
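For context, the h5py error below means two weights were written under the same HDF5 dataset name. A minimal stdlib-only sketch of how one might hunt for such collisions (not part of this PR; `weight_names` is a hypothetical input, e.g. the names collected from a model's variables before saving):

```python
from collections import Counter

def find_duplicate_names(weight_names):
    """Return the weight names that appear more than once.

    h5py refuses to create two datasets with the same name, so any
    duplicate here would trigger "Unable to create dataset (name
    already exists)" at save time.
    """
    counts = Counter(weight_names)
    return sorted(name for name, n in counts.items() if n > 1)

# Hypothetical example: two heads that both register a "classifier" layer
names = [
    "uperhead/classifier/kernel:0",
    "fcnhead/classifier/kernel:0",
    "classifier/kernel:0",
    "classifier/kernel:0",
]
print(find_duplicate_names(names))  # -> ['classifier/kernel:0']
```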
Here's the trace of the errors from running tests:
```
model = model_class(config)
model(self._prepare_for_class(inputs_dict, model_class)) # Model must be called before saving.
# Let's load it from the disk to be sure we can use pretrained weights
with tempfile.TemporaryDirectory() as tmpdirname:
> model.save_pretrained(tmpdirname, saved_model=False)
tests/test_modeling_tf_common.py:693:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/modeling_tf_utils.py:1513: in save_pretrained
self.save_weights(output_model_file)
../../.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/keras/utils/traceback_utils.py:67: in error_handler
raise e.with_traceback(filtered_tb) from None
../../.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/h5py/_hl/group.py:149: in create_dataset
dsid = dataset.make_new_dset(group, shape, dtype, data, name, **kwds)
../../.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/h5py/_hl/dataset.py:142: in make_new_dset
dset_id = h5d.create(parent.id, name, tid, sid, dcpl=dcpl)
h5py/_objects.pyx:54: in h5py._objects.with_phil.wrapper
???
h5py/_objects.pyx:55: in h5py._objects.with_phil.wrapper
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E ValueError: Unable to create dataset (name already exists)
h5py/h5d.pyx:87: ValueError
...
outputs = model(self._prepare_for_class(inputs_dict, model_class))
with tempfile.TemporaryDirectory() as tmpdirname:
> model.save_pretrained(tmpdirname, saved_model=False)
tests/test_modeling_tf_common.py:175:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/modeling_tf_utils.py:1513: in save_pretrained
self.save_weights(output_model_file)
../../.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/keras/utils/traceback_utils.py:67: in error_handler
raise e.with_traceback(filtered_tb) from None
../../.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/h5py/_hl/group.py:149: in create_dataset
dsid = dataset.make_new_dset(group, shape, dtype, data, name, **kwds)
../../.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/h5py/_hl/dataset.py:142: in make_new_dset
dset_id = h5d.create(parent.id, name, tid, sid, dcpl=dcpl)
h5py/_objects.pyx:54: in h5py._objects.with_phil.wrapper
???
h5py/_objects.pyx:55: in h5py._objects.with_phil.wrapper
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E ValueError: Unable to create dataset (name already exists)
h5py/h5d.pyx:87: ValueError
-------------------------------
```
Additionally, here's a little code for testing the segmentation class:
```py
from PIL import Image
import tensorflow as tf
from src.transformers.models.data2vec.modeling_tf_data2vec_vision import (
TFData2VecVisionForSemanticSegmentation
)
from transformers import BeitFeatureExtractor
def prepare_img():
image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png")
return image
feature_extractor = BeitFeatureExtractor.from_pretrained(
"facebook/data2vec-vision-base-ft1k"
)
model = TFData2VecVisionForSemanticSegmentation.from_pretrained(
"facebook/data2vec-vision-base",
)
image = prepare_img()
inputs = feature_extractor(images=image, return_tensors="tf")
batch_size, num_channels, height, width = inputs["pixel_values"].shape
inputs["labels"] = tf.zeros((batch_size, height, width))
outputs = model(**inputs)
print(outputs.logits.shape)
print(outputs.loss.shape)
```
@Rocketknight1 @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17271/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17271",
"html_url": "https://github.com/huggingface/transformers/pull/17271",
"diff_url": "https://github.com/huggingface/transformers/pull/17271.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17271.patch",
"merged_at": 1654693398000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17270
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17270/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17270/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17270/events
|
https://github.com/huggingface/transformers/pull/17270
| 1,236,674,115
|
PR_kwDOCUB6oc432qqR
| 17,270
|
Fix missing job action button in CI report
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
COLLABORATOR
| null |
# What does this PR do?
Current CI reports lack the `GitHub Action Job` button, due to the recent changes in workflow files:
- `models/bert` -> `models_bert` (was done in artifact names, but not in the matrix)
- `[single|multi]-gpu-docker` -> `[single|multi]-gpu` (was done in `notification_service.py`, but not in scheduled CI workflow)
This PR fixes the issues by:
- Letting the workflow files use `single-gpu` and `multi-gpu` as matrix and artifact names, and only adding `-docker` in `runs-on:` for scheduled CI.
- Adding `model.replace('models_', 'models/')` at the proper place in `notification_service.py`.
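The renaming round trip described above can be sketched as follows (a hedged illustration only; `to_artifact_name`/`to_model_folder` are hypothetical helper names, not the actual code in `notification_service.py`):

```python
def to_artifact_name(model_folder: str) -> str:
    # GitHub Actions artifact names cannot contain '/', so
    # "models/bert" is stored as "models_bert".
    return model_folder.replace("models/", "models_")

def to_model_folder(artifact_name: str) -> str:
    # Reverse mapping applied when building the CI report, so the
    # report can link back to the real model folder.
    return artifact_name.replace("models_", "models/")

print(to_artifact_name("models/bert"))  # -> models_bert
print(to_model_folder("models_bert"))   # -> models/bert
```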
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17270/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17270",
"html_url": "https://github.com/huggingface/transformers/pull/17270",
"diff_url": "https://github.com/huggingface/transformers/pull/17270.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17270.patch",
"merged_at": 1652769066000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17269
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17269/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17269/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17269/events
|
https://github.com/huggingface/transformers/pull/17269
| 1,236,663,657
|
PR_kwDOCUB6oc432ocP
| 17,269
|
Use the PR URL in push CI report
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
COLLABORATOR
| null |
# What does this PR do?
In the push CI report, change the URL from the (merged) commit page to the PR page (if that commit comes from a PR).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17269/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17269",
"html_url": "https://github.com/huggingface/transformers/pull/17269",
"diff_url": "https://github.com/huggingface/transformers/pull/17269.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17269.patch",
"merged_at": 1652731348000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17268
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17268/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17268/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17268/events
|
https://github.com/huggingface/transformers/issues/17268
| 1,236,622,941
|
I_kwDOCUB6oc5JtV5d
| 17,268
|
Swin Transformer V2
|
{
"login": "RyanHuangNLP",
"id": 49582480,
"node_id": "MDQ6VXNlcjQ5NTgyNDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/49582480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanHuangNLP",
"html_url": "https://github.com/RyanHuangNLP",
"followers_url": "https://api.github.com/users/RyanHuangNLP/followers",
"following_url": "https://api.github.com/users/RyanHuangNLP/following{/other_user}",
"gists_url": "https://api.github.com/users/RyanHuangNLP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RyanHuangNLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RyanHuangNLP/subscriptions",
"organizations_url": "https://api.github.com/users/RyanHuangNLP/orgs",
"repos_url": "https://api.github.com/users/RyanHuangNLP/repos",
"events_url": "https://api.github.com/users/RyanHuangNLP/events{/privacy}",
"received_events_url": "https://api.github.com/users/RyanHuangNLP/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Marking this as a good first issue as Swin v2 only adds a couple of small design improvements compared to Swin v1.\r\n\r\nOne could use the [add new-model-like](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command) feature to copy all Swin files, and then implement Swin v2 by tweaking these files. ",
"Hey! Can I give it a shot?",
"@NielsRogge I would like to add this model. ",
"Hi :) sure, maybe you can give me your email addresses such that we can set up a Slack channel for coordination.",
"> Hi :) sure, maybe you can give me your email addresses such that we can set up a Slack channel for coordination.\r\n\r\nHere is mine : ritiknandwal021@gmail.com",
"Hey @NielsRogge, I'd like to help out as well. My email is srinivasansabarish@gmail.com",
"my email is joaquinrivero94@gmail.com",
"Thanks, I'll create one. You should receive an invite later today",
"Hi all is this work complete, I'd love to help if possible."
] | 1,652
| 1,658
| 1,658
|
NONE
| null |
### Model description
[Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/pdf/2111.09883.pdf)
repo origin: [Swin Transformer V2](https://github.com/microsoft/Swin-Transformer#updates)
repo timm: [Swin Transformer V2](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/swin_transformer_v2.py)
All the pretrained models are ready.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
repo origin: [Swin Transformer](https://github.com/microsoft/Swin-Transformer#updates)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17268/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17267
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17267/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17267/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17267/events
|
https://github.com/huggingface/transformers/issues/17267
| 1,236,488,858
|
I_kwDOCUB6oc5Js1Ka
| 17,267
|
### System Info
|
{
"login": "SmileyBirdprey011",
"id": 105340574,
"node_id": "U_kgDOBkdeng",
"avatar_url": "https://avatars.githubusercontent.com/u/105340574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SmileyBirdprey011",
"html_url": "https://github.com/SmileyBirdprey011",
"followers_url": "https://api.github.com/users/SmileyBirdprey011/followers",
"following_url": "https://api.github.com/users/SmileyBirdprey011/following{/other_user}",
"gists_url": "https://api.github.com/users/SmileyBirdprey011/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SmileyBirdprey011/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SmileyBirdprey011/subscriptions",
"organizations_url": "https://api.github.com/users/SmileyBirdprey011/orgs",
"repos_url": "https://api.github.com/users/SmileyBirdprey011/repos",
"events_url": "https://api.github.com/users/SmileyBirdprey011/events{/privacy}",
"received_events_url": "https://api.github.com/users/SmileyBirdprey011/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,652
| 1,652
| 1,652
|
NONE
| null |
### System Info
```shell
I'm running transformers installed directly from `ee393c0`.
```
### Who can help?
@NielsRogge @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```py
import transformers

feature_extractor = transformers.AutoFeatureExtractor.from_pretrained("facebook/regnet-y-040")
```
If I don't have pillow installed, the second line fails with:
```
AttributeError: module transformers.models.convnext has no attribute ConvNextFeatureExtractor
```
If I then run `pip install pillow`, everything works as expected.
### Expected behavior
```shell
The feature extractor should be loaded successfully.
```
__Originally posted by @eric-mitchell in https://github.com/huggingface/transformers/issues/17266__
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17267/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17266
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17266/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17266/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17266/events
|
https://github.com/huggingface/transformers/issues/17266
| 1,236,470,127
|
I_kwDOCUB6oc5Jswlv
| 17,266
|
Loading facebook/regnet-y-040 FeatureExtractor fails mysteriously unless pillow is installed
|
{
"login": "eric-mitchell",
"id": 56408839,
"node_id": "MDQ6VXNlcjU2NDA4ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/56408839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eric-mitchell",
"html_url": "https://github.com/eric-mitchell",
"followers_url": "https://api.github.com/users/eric-mitchell/followers",
"following_url": "https://api.github.com/users/eric-mitchell/following{/other_user}",
"gists_url": "https://api.github.com/users/eric-mitchell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eric-mitchell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eric-mitchell/subscriptions",
"organizations_url": "https://api.github.com/users/eric-mitchell/orgs",
"repos_url": "https://api.github.com/users/eric-mitchell/repos",
"events_url": "https://api.github.com/users/eric-mitchell/events{/privacy}",
"received_events_url": "https://api.github.com/users/eric-mitchell/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Will look into this today. There should be a way to indicate the missing dependencies when the object is not found by using our dummy objects."
] | 1,652
| 1,652
| 1,652
|
NONE
| null |
### System Info
```shell
I'm running transformers installed directly from `ee393c0`.
```
### Who can help?
@NielsRogge @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
import transformers

feature_extractor = transformers.AutoFeatureExtractor.from_pretrained("facebook/regnet-y-040")
```
If I don't have pillow installed, the second line fails with:
```
AttributeError: module transformers.models.convnext has no attribute ConvNextFeatureExtractor
```
If I then run `pip install pillow`, everything works as expected.
### Expected behavior
```shell
The feature extractor should be loaded successfully.
```
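The comment on this issue mentions surfacing the missing dependency via dummy objects. A minimal sketch of that idea (assumptions: `ConvNextFeatureExtractor` stands in for any vision class, the availability check is simplified to a boolean, and the real mechanism in `transformers` is more elaborate):

```python
PIL_AVAILABLE = False  # pretend pillow is not installed

class DummyObject(type):
    """Metaclass for placeholders of classes with missing dependencies."""
    def __getattribute__(cls, name):
        if name.startswith("_"):
            return super().__getattribute__(name)
        # Any public attribute access (e.g. from_pretrained) raises a
        # helpful ImportError instead of a confusing AttributeError.
        raise ImportError(
            f"{cls.__name__} requires the pillow library, "
            "which was not found: run `pip install pillow`."
        )

if PIL_AVAILABLE:
    pass  # the real ConvNextFeatureExtractor would be imported here
else:
    class ConvNextFeatureExtractor(metaclass=DummyObject):
        pass

try:
    ConvNextFeatureExtractor.from_pretrained("facebook/regnet-y-040")
except ImportError as e:
    print(e)  # now names pillow as the missing dependency
```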
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17266/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17265
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17265/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17265/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17265/events
|
https://github.com/huggingface/transformers/issues/17265
| 1,236,380,210
|
I_kwDOCUB6oc5Jsaoy
| 17,265
|
OSError Directory not empty error in Trainer.py on checkpoint replacement
|
{
"login": "randywreed",
"id": 5059871,
"node_id": "MDQ6VXNlcjUwNTk4NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5059871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/randywreed",
"html_url": "https://github.com/randywreed",
"followers_url": "https://api.github.com/users/randywreed/followers",
"following_url": "https://api.github.com/users/randywreed/following{/other_user}",
"gists_url": "https://api.github.com/users/randywreed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/randywreed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/randywreed/subscriptions",
"organizations_url": "https://api.github.com/users/randywreed/orgs",
"repos_url": "https://api.github.com/users/randywreed/repos",
"events_url": "https://api.github.com/users/randywreed/events{/privacy}",
"received_events_url": "https://api.github.com/users/randywreed/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Thanks for the report! That sounds like a reasonable fix. Do you want to make a PR with it?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"What's the status of this? Is there a workaround without editing the source?",
"No PR was raised to fix it, you should go ahead if you want to contribute :-)"
] | 1,652
| 1,658
| 1,655
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: 4
- Using distributed or parallel set-up in script?: deepspeed
```
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Create a txt file of sentences, then run `run_clm.py` with the following parameters:
```
deepspeed --num_gpus=4 run_clm.py --deepspeed ds_config_gptj6b.json --model_name_or_path EleutherAI/gpt-j-6B --train_file Jesus_sayings.txt --do_train --fp16 --overwrite_cache --evaluation_strategy=steps --output_dir ~/gpt-j/finetuned --num_train_epochs 5 --eval_steps 1 --gradient_accumulation_steps 32 --per_device_train_batch_size 1 --use_fast_tokenizer False --learning_rate 5e-06 --warmup_steps 10 --save_total_limit 2 --save_steps 1 --save_strategy steps --tokenizer_name gpt2
```
Error traceback:
```
[INFO|modeling_utils.py:1546] 2022-05-15 18:25:49,903 >> Model weights saved in /home/ubuntu/gpt-j/finetuned/checkpoint-3/pytorch_model.bin
[INFO|tokenization_utils_base.py:2108] 2022-05-15 18:25:49,911 >> tokenizer config file saved in /home/ubuntu/gpt-j/finetuned/checkpoint-3/tokenizer_config.json
[INFO|tokenization_utils_base.py:2114] 2022-05-15 18:25:49,917 >> Special tokens file saved in /home/ubuntu/gpt-j/finetuned/checkpoint-3/special_tokens_map.json
[2022-05-15 18:26:00,522] [INFO] [engine.py:3177:save_16bit_model] Saving model weights to /home/ubuntu/gpt-j/finetuned/checkpoint-3/pytorch_model.bin
[2022-05-15 18:26:26,263] [INFO] [logging.py:69:log_dist] [Rank 0] Saving model checkpoint: /home/ubuntu/gpt-j/finetuned/checkpoint-3/global_step3/zero_pp_rank_0_mp_rank_00_model_states.pt
[2022-05-15 18:27:44,462] [INFO] [engine.py:3063:_save_zero_checkpoint] zero checkpoint saved /home/ubuntu/gpt-j/finetuned/checkpoint-3/global_step3/zero_pp_rank_0_mp_rank_00_optim_states.pt
[INFO|trainer.py:2424] 2022-05-15 18:27:46,523 >> Deleting older checkpoint [/home/ubuntu/gpt-j/finetuned/checkpoint-1] due to args.save_total_limit
Traceback (most recent call last):
File "run_clm.py", line 575, in <module>
main()
File "run_clm.py", line 523, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1320, in train
return inner_training_loop(
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1634, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1805, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1964, in _save_checkpoint
self._rotate_checkpoints(use_mtime=True, output_dir=run_dir)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2425, in _rotate_checkpoints
shutil.rmtree(checkpoint)
File "/usr/lib/python3.8/shutil.py", line 718, in rmtree
_rmtree_safe_fd(fd, path, onerror)
File "/usr/lib/python3.8/shutil.py", line 659, in _rmtree_safe_fd
onerror(os.rmdir, fullname, sys.exc_info())
File "/usr/lib/python3.8/shutil.py", line 657, in _rmtree_safe_fd
os.rmdir(entry.name, dir_fd=topfd)
OSError: [Errno 39] Directory not empty: 'global_step1'
4%|โโโ | 3/70 [21:59<8:11:00, 439.71s/it]
[2022-05-15 18:27:50,264] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 78507
[2022-05-15 18:27:50,265] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 78508
[2022-05-15 18:27:50,265] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 78509
[2022-05-15 18:27:50,266] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 78510
[2022-05-15 18:27:50,267] [ERROR] [launch.py:184:sigkill_handler] ['/usr/bin/python3', '-u', 'run_clm.py', '--local_rank=3', '--deepspeed', 'ds_config_gptj6b.json', '--model_name_or_path', 'EleutherAI/gpt-j-6B', '--train_file', 'Jesus_sayings.txt', '--do_train', '--fp16', '--overwrite_cache', '--evaluation_strategy=steps', '--output_dir', '/home/ubuntu/gpt-j/finetuned', '--num_train_epochs', '5', '--eval_steps', '1', '--gradient_accumulation_steps', '32', '--per_device_train_batch_size', '1', '--use_fast_tokenizer', 'False', '--learning_rate', '5e-06', '--warmup_steps', '10', '--save_total_limit', '2', '--save_steps', '1', '--save_strategy', 'steps', '--tokenizer_name', 'gpt2'] exits with return code = 1
```
### Expected behavior
Should delete the old checkpoint without error.

Workaround: changed `trainer.py` line 2425 to

```python
shutil.rmtree(checkpoint, ignore_errors=True)
```

This lets the program run without error, but it leaves behind ghost checkpoint directories with no content, though these are gradually pruned.
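Rather than swallowing the error with `ignore_errors=True`, another possible workaround is to retry the deletion, since the `Directory not empty` error is typically transient while another process (here, the other DeepSpeed ranks) is still releasing files in the checkpoint directory. A minimal sketch, not tested against the Trainer — `robust_rmtree` is a hypothetical helper, not a transformers API:

```python
import shutil
import time

def robust_rmtree(path, retries=5, delay=0.5):
    """Retry shutil.rmtree a few times before giving up.

    OSError 39 ("Directory not empty") can occur when entries reappear
    between the directory listing and the rmdir call, e.g. on NFS or
    while another rank is still flushing a checkpoint.
    """
    for attempt in range(retries):
        try:
            shutil.rmtree(path)
            return
        except OSError:
            if attempt == retries - 1:
                raise  # still failing after all retries
            time.sleep(delay)
```

Unlike `ignore_errors=True`, this keeps genuine failures visible and avoids leaving empty ghost checkpoint directories behind.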
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17265/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17264
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17264/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17264/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17264/events
|
https://github.com/huggingface/transformers/issues/17264
| 1,236,368,770
|
I_kwDOCUB6oc5JsX2C
| 17,264
|
Problem with Adding LayerNorm after BART's Encoder for Summarization
|
{
"login": "meetdavidwan",
"id": 86493068,
"node_id": "MDQ6VXNlcjg2NDkzMDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/86493068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meetdavidwan",
"html_url": "https://github.com/meetdavidwan",
"followers_url": "https://api.github.com/users/meetdavidwan/followers",
"following_url": "https://api.github.com/users/meetdavidwan/following{/other_user}",
"gists_url": "https://api.github.com/users/meetdavidwan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meetdavidwan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meetdavidwan/subscriptions",
"organizations_url": "https://api.github.com/users/meetdavidwan/orgs",
"repos_url": "https://api.github.com/users/meetdavidwan/repos",
"events_url": "https://api.github.com/users/meetdavidwan/events{/privacy}",
"received_events_url": "https://api.github.com/users/meetdavidwan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"From what I can see, the weights of the self_attn and feed-forward layers are randomly initialised here; since your code changes the model structure, these weights won't be loaded from the pre-trained model, which could explain this. Also, `self.encoder` already performs attention, so why do you add another attention layer after the encoder?\r\n\r\nLastly, since this is a more general question and not a bug, I would suggest posting it [on the forum](https://discuss.huggingface.co/). Thanks!",
"Hi Suraj, thank you for the comment!\r\n\r\nYou are right in that the self_attn, feedforward, and the layernorms are newly intialized, but I expect them to be trained and updated and get similar performance. As you can see in my second run where I have the self_attn and feedforward (but no layernorm), it is updating correctly and achieving similar performance than without these additions (regular BART). However, only adding the layernorm to it makes the model unusable (the third run), which I believe might be a bug and not a general question (unless I am missing some crucial part).",
"@meetdavidwan note that we sadly don't have the time to answer issues that include customized model architectures. We need to limit ourselves to the officially provided implementations only sadly. The forum is the best way of getting help here I think :-)"
] | 1,652
| 1,652
| 1,652
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.1
- Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: tried both distributed and 1gpu. I also tried deepspeed and full 32 precision.
```
### Who can help?
@patrickvonplaten @patil-suraj
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to add additional layers/encoders after the BartEncoder that involve self-attention and layernorm layers, and after debugging I find that whenever I call the layernorm, the model cannot give a reasonable ROUGE score at test time. Here is the minimal reproduction code.
1. I used `examples/pytorch/summarization/run_summarization.py`. The changes I make (which I think are harmless) are commenting out the version requirement and calling my own model `BARTForConditionalGenerationTest` (which I am pasting below). So the change is `model = BARTForConditionalGenerationTest.from_pretrained(` instead of `model = AutoModelForSeq2SeqLM.from_pretrained(`.
2. The testing model adds the self-attention + layernorm module, which I copied directly from [BartEncoderLayer](https://github.com/huggingface/transformers/blob/a22db885b41b3a1b302fc206312ee4d99cdf4b7c/src/transformers/models/bart/modeling_bart.py#L284):
```python
import torch
import torch.nn as nn
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
from dataclasses import dataclass
from typing import Optional, Tuple

from transformers.models.bart.modeling_bart import (
    BartForConditionalGeneration,
    BartModel,
    BartDecoder,
    BartEncoder,
    BartAttention,
    shift_tokens_right,
    _expand_mask,
)
from transformers.activations import ACT2FN
from transformers.modeling_outputs import (
    Seq2SeqModelOutput,
    Seq2SeqLMOutput,
    BaseModelOutput,
)


class BARTModelTest(BartModel):
    def __init__(self, config):
        super().__init__(config)

        # additional layer to showcase the layernorm issue
        self.embed_dim = config.d_model
        self.self_attn = BartAttention(
            embed_dim=self.embed_dim,
            num_heads=config.encoder_attention_heads,
            dropout=config.attention_dropout,
        )
        self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim)
        self.dropout = config.dropout
        self.activation_fn = ACT2FN[config.activation_function]
        self.activation_dropout = config.activation_dropout
        self.fc1 = nn.Linear(self.embed_dim, config.encoder_ffn_dim)
        self.fc2 = nn.Linear(config.encoder_ffn_dim, self.embed_dim)
        self.final_layer_norm = nn.LayerNorm(self.embed_dim)

        self.post_init()

    def forward(
        self,
        input_ids=None,
        attention_mask=None,
        decoder_input_ids=None,
        decoder_attention_mask=None,
        head_mask=None,
        decoder_head_mask=None,
        cross_attn_head_mask=None,
        encoder_outputs=None,
        past_key_values=None,
        inputs_embeds=None,
        decoder_inputs_embeds=None,
        use_cache=None,
        output_attentions=None,
        output_hidden_states=None,
        return_dict=None,
    ):
        # different to other models, Bart automatically creates decoder_input_ids from
        # input_ids if no decoder_input_ids are provided
        if decoder_input_ids is None and decoder_inputs_embeds is None:
            if input_ids is None:
                raise ValueError(
                    "If no `decoder_input_ids` or `decoder_inputs_embeds` are "
                    "passed, `input_ids` cannot be `None`. Please pass either "
                    "`input_ids` or `decoder_input_ids` or `decoder_inputs_embeds`."
                )

            decoder_input_ids = shift_tokens_right(
                input_ids, self.config.pad_token_id, self.config.decoder_start_token_id
            )

        output_attentions = (
            output_attentions
            if output_attentions is not None
            else self.config.output_attentions
        )
        output_hidden_states = (
            output_hidden_states
            if output_hidden_states is not None
            else self.config.output_hidden_states
        )
        use_cache = use_cache if use_cache is not None else self.config.use_cache
        return_dict = (
            return_dict if return_dict is not None else self.config.use_return_dict
        )

        if encoder_outputs is None:
            encoder_outputs = self.encoder(
                input_ids=input_ids,
                attention_mask=attention_mask,
                head_mask=head_mask,
                inputs_embeds=inputs_embeds,
                output_attentions=output_attentions,
                output_hidden_states=output_hidden_states,
                return_dict=return_dict,
            )

        # NEW: Pass to another self attention
        hidden_states = encoder_outputs.last_hidden_state
        residual = hidden_states
        _attention_mask = _expand_mask(attention_mask, hidden_states.dtype)
        hidden_states, attn_weights, _ = self.self_attn(
            hidden_states=hidden_states,
            attention_mask=_attention_mask,
        )
        hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
        hidden_states = residual + hidden_states
        # Problematic LayerNorm Layer
        hidden_states = self.self_attn_layer_norm(hidden_states)

        residual = hidden_states
        hidden_states = self.activation_fn(self.fc1(hidden_states))
        hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)
        hidden_states = self.fc2(hidden_states)
        hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
        hidden_states = residual + hidden_states
        # Problematic LayerNorm Layer
        hidden_states = self.final_layer_norm(hidden_states)

        encoder_outputs.last_hidden_state = hidden_states

        # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn)
        decoder_outputs = self.decoder(
            input_ids=decoder_input_ids,
            attention_mask=decoder_attention_mask,
            encoder_hidden_states=encoder_outputs.last_hidden_state,
            encoder_attention_mask=attention_mask,
            head_mask=decoder_head_mask,
            cross_attn_head_mask=cross_attn_head_mask,
            past_key_values=past_key_values,
            inputs_embeds=decoder_inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        if not return_dict:
            return decoder_outputs + encoder_outputs

        return Seq2SeqModelOutput(
            last_hidden_state=decoder_outputs.last_hidden_state,
            past_key_values=decoder_outputs.past_key_values,
            decoder_hidden_states=decoder_outputs.hidden_states,
            decoder_attentions=decoder_outputs.attentions,
            cross_attentions=decoder_outputs.cross_attentions,
            encoder_last_hidden_state=encoder_outputs.last_hidden_state,
            encoder_attentions=encoder_outputs.attentions,
        )


class BARTForConditionalGenerationTest(BartForConditionalGeneration):
    def __init__(self, config):
        super().__init__(config)
        self.model = BARTModelTest(config)

        # Initialize weights and apply final processing
        self.post_init()
```
Note the lines marked with the comment `# NEW`.
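As an aside on why the freshly initialised LayerNorms are more disruptive than the fresh attention/FFN layers: LayerNorm re-centers and re-scales every hidden vector, so any scale or offset statistics the pretrained decoder expects from the encoder output are wiped out until the new parameters adapt. A minimal stdlib sketch of the operation (not from the issue; the `layer_norm` helper is illustrative, mimicking `nn.LayerNorm` with its default affine parameters weight=1, bias=0):

```python
import math

def layer_norm(vec, eps=1e-5):
    """Plain LayerNorm over one hidden vector, default affine params."""
    mu = sum(vec) / len(vec)
    var = sum((x - mu) ** 2 for x in vec) / len(vec)
    return [(x - mu) / math.sqrt(var + eps) for x in vec]

# Pretend encoder hidden state with a scale/offset the decoder was trained on.
hidden = [4.0, 6.0, 8.0, 10.0]   # mean 7.0, std ~2.24
normed = layer_norm(hidden)

mean_after = sum(normed) / len(normed)
std_after = math.sqrt(sum((x - mean_after) ** 2 for x in normed) / len(normed))
# mean_after is ~0 and std_after is ~1: the original statistics are gone,
# which is consistent with the decoder producing garbage until retrained.
```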
3. Running this on XSum with just one gpu:
```
python run_summarization.py --fp16 \
--dataset_name xsum --do_train \
--model_name facebook/bart-base \
--tokenizer_name facebook/bart-base \
--do_eval --evaluation_strategy steps --eval_steps 10 --predict_with_generate \
--per_device_train_batch_size 64 --per_device_eval_batch_size 16 \
--gradient_accumulation_steps 1 \
--learning_rate 3e-05 --weight_decay 0.01 --label_smoothing 0.1 \
--max_source_length 512 --max_target_length 64 \
--logging_step 100 --max_steps 5000 \
--warmup_steps 0 --save_steps 1000 \
--output_dir test_layernorm --max_eval_samples 10 --max_train_samples 1000 --max_predict_samples 100
```
I stop this after 30 steps.
---- Results ----
1. Running this with original `AutoModelForSeq2SeqLM`
```
{'eval_loss': 3.429733991622925, 'eval_rouge1': 35.3788, 'eval_rouge2': 11.958, 'eval_rougeL': 28.7712, 'eval_rougeLsum': 28.8147, 'eval_gen_len': 19.6, 'eval_runtime': 0.4073, 'eval_samples_per_second': 24.552, 'eval_steps_per_second': 2.455, 'epoch': 0.62}
0%|โ | 20/5000 [00:10<40:09, 2.07it/s][INFO|trainer.py:2590] 2022-05-15 14:54:19,166 >> ***** Running Evaluation *****
[INFO|trainer.py:2592] 2022-05-15 14:54:19,166 >> Num examples = 10
[INFO|trainer.py:2595] 2022-05-15 14:54:19,166 >> Batch size = 16
05/15/2022 14:54:19 - INFO - datasets.metric - Removing /home/davidwan/.cache/huggingface/metrics/rouge/default/default_experiment-1-0.arrow | 0/1 [00:00<?, ?it/s]
{'eval_loss': 3.320158004760742, 'eval_rouge1': 30.3056, 'eval_rouge2': 10.7887, 'eval_rougeL': 28.2016, 'eval_rougeLsum': 28.0782, 'eval_gen_len': 19.8, 'eval_runtime': 0.3998, 'eval_samples_per_second': 25.01, 'eval_steps_per_second': 2.501, 'epoch': 1.25}
1%|โ | 30/5000 [00:15<41:45, 1.98it/s][INFO|trainer.py:2590] 2022-05-15 14:54:24,528 >> ***** Running Evaluation *****
[INFO|trainer.py:2592] 2022-05-15 14:54:24,528 >> Num examples = 10
[INFO|trainer.py:2595] 2022-05-15 14:54:24,528 >> Batch size = 16
05/15/2022 14:54:24 - INFO - datasets.metric - Removing /home/davidwan/.cache/huggingface/metrics/rouge/default/default_experiment-1-0.arrow | 0/1 [00:00<?, ?it/s]
{'eval_loss': 3.2896971702575684, 'eval_rouge1': 30.415, 'eval_rouge2': 8.1278, 'eval_rougeL': 27.7237, 'eval_rougeLsum': 27.6498, 'eval_gen_len': 20.0, 'eval_runtime': 0.3894, 'eval_samples_per_second': 25.681, 'eval_steps_per_second': 2.568, 'epoch': 1.88}
```
2. Running with my model but commenting out the two lines that call the layernorms (i.e. `hidden_states = self.self_attn_layer_norm(hidden_states)` and `hidden_states = self.final_layer_norm(hidden_states)`)
```
{'eval_loss': 3.460312604904175, 'eval_rouge1': 32.4359, 'eval_rouge2': 9.7464, 'eval_rougeL': 27.5792, 'eval_rougeLsum': 27.4135, 'eval_gen_len': 19.1, 'eval_runtime': 1.0524, 'eval_samples_per_second': 9.502, 'eval_steps_per_second': 0.95, 'epoch': 0.62}
0%|โ | 20/5000 [00:12<46:20, 1.79it/s][INFO|trainer.py:2590] 2022-05-15 14:57:13,684 >> ***** Running Evaluation *****
[INFO|trainer.py:2592] 2022-05-15 14:57:13,684 >> Num examples = 10
[INFO|trainer.py:2595] 2022-05-15 14:57:13,684 >> Batch size = 16
05/15/2022 14:57:14 - INFO - datasets.metric - Removing /home/davidwan/.cache/huggingface/metrics/rouge/default/default_experiment-1-0.arrow | 0/1 [00:00<?, ?it/s]
{'eval_loss': 3.37113881111145, 'eval_rouge1': 29.4708, 'eval_rouge2': 7.4381, 'eval_rougeL': 24.7256, 'eval_rougeLsum': 24.5516, 'eval_gen_len': 19.9, 'eval_runtime': 0.7387, 'eval_samples_per_second': 13.538, 'eval_steps_per_second': 1.354, 'epoch': 1.25}
1%|โ | 30/5000 [00:18<47:48, 1.73it/s][INFO|trainer.py:2590] 2022-05-15 14:57:20,076 >> ***** Running Evaluation *****
[INFO|trainer.py:2592] 2022-05-15 14:57:20,076 >> Num examples = 10
[INFO|trainer.py:2595] 2022-05-15 14:57:20,076 >> Batch size = 16
05/15/2022 14:57:20 - INFO - datasets.metric - Removing /home/davidwan/.cache/huggingface/metrics/rouge/default/default_experiment-1-0.arrow | 0/1 [00:00<?, ?it/s]
{'eval_loss': 3.33235239982605, 'eval_rouge1': 33.9623, 'eval_rouge2': 11.8778, 'eval_rougeL': 30.1785, 'eval_rougeLsum': 30.1524, 'eval_gen_len': 19.7, 'eval_runtime': 0.7438, 'eval_samples_per_second': 13.444, 'eval_steps_per_second': 1.344, 'epoch': 1.88}
```
3. Running my model with the layernorms:
```
{'eval_loss': 9.264244079589844, 'eval_rouge1': 8.4575, 'eval_rouge2': 0.0, 'eval_rougeL': 7.8523, 'eval_rougeLsum': 7.8706, 'eval_gen_len': 20.0, 'eval_runtime': 0.7076, 'eval_samples_per_second': 14.133, 'eval_steps_per_second': 1.413, 'epoch': 0.62}
0%|โ | 20/5000 [00:11<45:57, 1.81it/s][INFO|trainer.py:2590] 2022-05-15 14:58:27,171 >> ***** Running Evaluation *****
[INFO|trainer.py:2592] 2022-05-15 14:58:27,172 >> Num examples = 10
[INFO|trainer.py:2595] 2022-05-15 14:58:27,172 >> Batch size = 16
05/15/2022 14:58:27 - INFO - datasets.metric - Removing /home/davidwan/.cache/huggingface/metrics/rouge/default/default_experiment-1-0.arrow | 0/1 [00:00<?, ?it/s]
{'eval_loss': 8.134066581726074, 'eval_rouge1': 14.0672, 'eval_rouge2': 1.2222, 'eval_rougeL': 12.6982, 'eval_rougeLsum': 13.1708, 'eval_gen_len': 18.3, 'eval_runtime': 0.7573, 'eval_samples_per_second': 13.205, 'eval_steps_per_second': 1.32, 'epoch': 1.25}
1%|โ | 30/5000 [00:17<47:47, 1.73it/s][INFO|trainer.py:2590] 2022-05-15 14:58:33,581 >> ***** Running Evaluation *****
[INFO|trainer.py:2592] 2022-05-15 14:58:33,581 >> Num examples = 10
[INFO|trainer.py:2595] 2022-05-15 14:58:33,581 >> Batch size = 16
05/15/2022 14:58:34 - INFO - datasets.metric - Removing /home/davidwan/.cache/huggingface/metrics/rouge/default/default_experiment-1-0.arrow | 0/1 [00:00<?, ?it/s]
{'eval_loss': 7.54071569442749, 'eval_rouge1': 5.2054, 'eval_rouge2': 0.0, 'eval_rougeL': 5.0935, 'eval_rougeLsum': 5.1303, 'eval_gen_len': 11.5, 'eval_runtime': 0.7393, 'eval_samples_per_second': 13.526, 'eval_steps_per_second': 1.353, 'epoch': 1.88}
```
### Expected behavior
I expect the model to still work in a reasonable way (generating summaries). In my own code and on custom data, I do see the loss go down to a value similar to the run without layernorm (as you can also see here), but the ROUGE score during evaluation is always around 3 (or a nonsensical value that does not improve). I think what I am doing here is essentially just adding another `BartEncoderLayer`?

Any help would be appreciated! Thank you!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17264/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17263
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17263/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17263/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17263/events
|
https://github.com/huggingface/transformers/pull/17263
| 1,236,357,408
|
PR_kwDOCUB6oc431nt7
| 17,263
|
docs(transformers): fix typo
|
{
"login": "k-zehnder",
"id": 51463990,
"node_id": "MDQ6VXNlcjUxNDYzOTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/51463990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/k-zehnder",
"html_url": "https://github.com/k-zehnder",
"followers_url": "https://api.github.com/users/k-zehnder/followers",
"following_url": "https://api.github.com/users/k-zehnder/following{/other_user}",
"gists_url": "https://api.github.com/users/k-zehnder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/k-zehnder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/k-zehnder/subscriptions",
"organizations_url": "https://api.github.com/users/k-zehnder/orgs",
"repos_url": "https://api.github.com/users/k-zehnder/repos",
"events_url": "https://api.github.com/users/k-zehnder/events{/privacy}",
"received_events_url": "https://api.github.com/users/k-zehnder/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
Goal: fix a typo in the transformers docs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17263/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17263",
"html_url": "https://github.com/huggingface/transformers/pull/17263",
"diff_url": "https://github.com/huggingface/transformers/pull/17263.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17263.patch",
"merged_at": 1652735070000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17262
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17262/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17262/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17262/events
|
https://github.com/huggingface/transformers/pull/17262
| 1,236,321,146
|
PR_kwDOCUB6oc431gz3
| 17,262
|
Spanish translation of the files sagemaker.mdx and image_classification.mdx
|
{
"login": "SimplyJuanjo",
"id": 87780148,
"node_id": "MDQ6VXNlcjg3NzgwMTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/87780148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SimplyJuanjo",
"html_url": "https://github.com/SimplyJuanjo",
"followers_url": "https://api.github.com/users/SimplyJuanjo/followers",
"following_url": "https://api.github.com/users/SimplyJuanjo/following{/other_user}",
"gists_url": "https://api.github.com/users/SimplyJuanjo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SimplyJuanjo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SimplyJuanjo/subscriptions",
"organizations_url": "https://api.github.com/users/SimplyJuanjo/orgs",
"repos_url": "https://api.github.com/users/SimplyJuanjo/repos",
"events_url": "https://api.github.com/users/SimplyJuanjo/events{/privacy}",
"received_events_url": "https://api.github.com/users/SimplyJuanjo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for your contribution!",
"Muchas gracias @SimplyJuanjo for the PR! ๐ค Please let me know if you wish to translate another one. "
] | 1,652
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds the Spanish version of [sagemaker.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/en/sagemaker.mdx) and [image_classification.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/tasks/image_classification.mdx) to [transformers/docs/source/es](https://github.com/huggingface/transformers/tree/master/docs/source/es)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/15947 (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
@omarespejel @osanseviero @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17262/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17262",
"html_url": "https://github.com/huggingface/transformers/pull/17262",
"diff_url": "https://github.com/huggingface/transformers/pull/17262.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17262.patch",
"merged_at": 1653520216000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17261
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17261/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17261/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17261/events
|
https://github.com/huggingface/transformers/pull/17261
| 1,236,274,732
|
PR_kwDOCUB6oc431YKM
| 17,261
|
TF - Fix convnext classification example
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
MEMBER
| null |
# What does this PR do?
Fixes what was probably a copy-paste mistake. As visible [here](https://huggingface.co/docs/transformers/main/en/model_doc/convnext#transformers.TFConvNextForImageClassification.call.example), the example doesn't run at the moment; it does with the fix.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17261/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17261",
"html_url": "https://github.com/huggingface/transformers/pull/17261",
"diff_url": "https://github.com/huggingface/transformers/pull/17261.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17261.patch",
"merged_at": 1652700241000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17260
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17260/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17260/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17260/events
|
https://github.com/huggingface/transformers/issues/17260
| 1,236,209,726
|
I_kwDOCUB6oc5JrxA-
| 17,260
|
Missing of token_type_ids parameter in OPTForCausalLM.forward
|
{
"login": "Tuan-Lee-23",
"id": 43035837,
"node_id": "MDQ6VXNlcjQzMDM1ODM3",
"avatar_url": "https://avatars.githubusercontent.com/u/43035837?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tuan-Lee-23",
"html_url": "https://github.com/Tuan-Lee-23",
"followers_url": "https://api.github.com/users/Tuan-Lee-23/followers",
"following_url": "https://api.github.com/users/Tuan-Lee-23/following{/other_user}",
"gists_url": "https://api.github.com/users/Tuan-Lee-23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tuan-Lee-23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tuan-Lee-23/subscriptions",
"organizations_url": "https://api.github.com/users/Tuan-Lee-23/orgs",
"repos_url": "https://api.github.com/users/Tuan-Lee-23/repos",
"events_url": "https://api.github.com/users/Tuan-Lee-23/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tuan-Lee-23/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello @Tuan-Lee-23!\r\n\r\n@patrickvonplaten, @younesbelkada or @ArthurZucker can give more details, but we aim to follow the original implementation as closely as possible. The original implementation does not leverage token type IDs and nor do the checkpoints, so there was no need to implement them for OPT.\r\n\r\nToken Type IDs are not a required parameter for several models, so this is not an isolated case.",
"I'm sorry that I didn't notice about the original implementation of OPT\r\n@LysandreJik Thank you for your clarification ",
"First of all the issues of non attendance or lack of participation is due to the situation with the hack/ bot whatever it os limits my screen time talk time communications all at will. Sorry but Iโm no coder and came here to get hrlelp to learn how to take care of my own ",
"Retaliation sucks"
] | 1,652
| 1,652
| 1,652
|
NONE
| null |
### Feature request
According to OPT's [documentation](https://huggingface.co/docs/transformers/model_doc/opt#transformers.OPTForCausalLM),
`OPTForCausalLM`'s `forward` method is missing the `token_type_ids` parameter.
I notice that most other GPT variants accept `token_type_ids` in their `forward` method.
Please add it for the community.
### Motivation
None
### Your contribution
None
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17260/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17259
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17259/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17259/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17259/events
|
https://github.com/huggingface/transformers/issues/17259
| 1,236,202,917
|
I_kwDOCUB6oc5JrvWl
| 17,259
|
TROCR truncating output string
|
{
"login": "ronin2304",
"id": 105577596,
"node_id": "U_kgDOBkr8fA",
"avatar_url": "https://avatars.githubusercontent.com/u/105577596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ronin2304",
"html_url": "https://github.com/ronin2304",
"followers_url": "https://api.github.com/users/ronin2304/followers",
"following_url": "https://api.github.com/users/ronin2304/following{/other_user}",
"gists_url": "https://api.github.com/users/ronin2304/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ronin2304/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ronin2304/subscriptions",
"organizations_url": "https://api.github.com/users/ronin2304/orgs",
"repos_url": "https://api.github.com/users/ronin2304/repos",
"events_url": "https://api.github.com/users/ronin2304/events{/privacy}",
"received_events_url": "https://api.github.com/users/ronin2304/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"The [generate](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate) method provides an argument called `max_length` which specifies the max number of tokens to generate.\r\n\r\nNote that generation stops when the end-of-sequence token is generated.",
"> The [generate](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate) method provides an argument called `max_length` which specifies the max number of tokens to generate.\r\n> \r\n> Note that generation stops when the end-of-sequence token is generated.\r\n@NielsRogge Thank you very much ! \r\nCheers\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,652
| 1,655
| 1,655
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.1
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.8.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: (False)
- Using distributed or parallel set-up in script?: (False)
```
### Who can help?
@NielsRogge Hi, first of all, thank you for providing such a comprehensive port. I am generally impressed with the output of the model (I haven't tuned it yet, just testing). However, I have an issue with long input strings when testing TrOCR; I have already searched the documentation but could not find a relevant inference parameter. Example 1:

Output would be: 'AUGENMENISKUSHORIZONTAL- / LAPPEN- /R'
I assumed there might be a `max_length` of ~32, so I tried:

Output would be :'SUPERCALIFRAGISLISTICEXPIALIDOCIOUSFANTAST'
Can you help me out here? Thanks in advance
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
using this script:
```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# device = "cuda" if torch.cuda.is_available() else "cpu"
print_processor = TrOCRProcessor.from_pretrained('microsoft/trocr-large-printed')
print_model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-large-printed')  # .to(device)

def ocr_print_image(src_img):
    pixel_values = print_processor(images=src_img, return_tensors="pt").pixel_values
    generated_ids = print_model.generate(pixel_values)
    return print_processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

handwriting1 = Image.open(r'test_image.jpg')
ocr_print_image(handwriting1)
```
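For illustration, generation stops either when the end-of-sequence token is produced or when a `max_length` cap is reached, which is why a low default cap truncates long strings. A toy greedy loop in plain Python (standing in for `model.generate`; the token ids and `eos_id` here are made up) sketches that stopping rule:

```python
def toy_generate(next_token_fn, eos_id, max_length):
    # Mimic generate()'s stopping rule: emit tokens until the
    # end-of-sequence id appears or max_length tokens are produced.
    out = []
    while len(out) < max_length:
        tok = next_token_fn(len(out))
        if tok == eos_id:
            break
        out.append(tok)
    return out

# A fake "model" that would emit 50 tokens before its eos token (id 2).
fake_model = lambda step: 2 if step == 50 else step + 10

print(len(toy_generate(fake_model, eos_id=2, max_length=20)))   # 20 -> truncated
print(len(toy_generate(fake_model, eos_id=2, max_length=200)))  # 50 -> full output
```

With a real model the same effect is obtained by passing a larger `max_length` to `generate`.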
### Expected behavior
```shell
I would like an output that is not capped or truncated, as it currently seems to be. I realize these are edge cases; however, example one in particular is a real-world case and can occur often.
Thanks again.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17259/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17258
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17258/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17258/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17258/events
|
https://github.com/huggingface/transformers/issues/17258
| 1,236,152,415
|
I_kwDOCUB6oc5JrjBf
| 17,258
|
run_clm.py exits with error -9 on checkpoint restart
|
{
"login": "randywreed",
"id": 5059871,
"node_id": "MDQ6VXNlcjUwNTk4NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5059871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/randywreed",
"html_url": "https://github.com/randywreed",
"followers_url": "https://api.github.com/users/randywreed/followers",
"following_url": "https://api.github.com/users/randywreed/following{/other_user}",
"gists_url": "https://api.github.com/users/randywreed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/randywreed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/randywreed/subscriptions",
"organizations_url": "https://api.github.com/users/randywreed/orgs",
"repos_url": "https://api.github.com/users/randywreed/repos",
"events_url": "https://api.github.com/users/randywreed/events{/privacy}",
"received_events_url": "https://api.github.com/users/randywreed/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Looks like the error was thrown by DeepSpeed reloading the checkpoint, so maybe your issue would be better suited in their repo? Also cc @stas00 for information.",
"Based on your report I don't think it has anything to do with Deepspeed.\r\n\r\n```\r\n[2022-05-15 00:12:57,785] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 97277\r\n[2022-05-15 00:12:57,786] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 97278\r\n```\r\n\r\nIt looks like `cgroups` or oom killer or whatever resource control system you use killed the launcher process, which killed the other processes. Check `dmesg` to see if you get a report of a process killed by kernel or a helper util.\r\n\r\nDo you have enough CPU memory to load the checkpoint? It's possible that there was more CPU RAM at the start and then it got reduced when you restarted?\r\n\r\nYou could activate `--skip_memory_metrics 0` with just a few steps and get stats on how much CPU memory your script is using in each stage, then comparing to your free CPU memory.\r\n\r\nAdding swap memory often can help with the situation. Let me know if you want instructions for that.\r\n\r\nHow much CPU memory do you have on this host? Do you use it for training only or is it part of a desktop that you use for other things.",
"@stas00 So this is a cloud system used just for finetuning. It has 200gb of Ram, 2 48gb GPUs. But I set up a ram tracker and it hits 100% when it fails. So that's clearly the problem. Is the swap you are talking about different than a normal swap file in ubuntu?\r\n",
"As I suspected. Those resource controlling tools aren't very user-friendly. I just run a lot into this `Killed` w/o any explanation use-case on HPC so I know to suspect these.\r\n\r\nYes, normal swap. The question is whether the `cgroups` is set to have swap help out. Doesn't hurt to try.\r\n\r\n`cgroups` typically monitors the residential memory, so the unused memory will go to swap if there is one.\r\n\r\nHere is how I normally add it (of course edit the paths):\r\n\r\n```\r\n\r\n### Add a new swap file or extend one ###\r\n\r\n# turn off all swap processes\r\nsudo swapoff -a\r\n\r\n# add 128GB file (or resize it if it already exists)\r\nsudo dd if=/dev/zero of=/mnt/nvme0/swapfile bs=1G count=128\r\n\r\n# prep as swap\r\nsudo chmod 600 /mnt/nvme0/swapfile\r\nsudo chown root.root /mnt/nvme0/swapfile\r\nsudo mkswap /mnt/nvme0/swapfile\r\n\r\n# activate the swap file\r\nsudo swapon /mnt/nvme0/swapfile\r\n\r\n# check the amount of swap available\r\ngrep SwapTotal /proc/meminfo\r\n\r\n# to make permanent add to /etc/fstab if it isnโt already there\r\n/mnt/nvme0/swapfile none swap sw 0 0\r\n```",
"Now the more interesting question is why 200GB of RAM is not enough. Can you tell how much is available when you start the program?\r\n\r\nTo load the model on each process would be 2x - so 2*24GB = just 48 GB, which is temp memory and is freed once you moved models to gpu. \r\n\r\nThen you have 6*18=108GB just for the weights, optim states and grads and then some for activations. So you can't fully fit those into 2x 48GB gpus\r\n\r\nAnd so you're using the CPU offload, and that's where you run out of memory as you're offloading both optim states and the params.\r\n\r\nBut given that you have 2x 48GB GPU - you probably don't need to offload both, params and optim stages - how about just offloading the optim states, i.e. set:\r\n\r\n```\r\n \"offload_param\": {\r\n \"device\": \"none\",\r\n \"pin_memory\": false\r\n },\r\n```\r\n\r\nAlso monitor `nvidia-smi` and watch your gpu memory usage - I bet at the moment it's barely being used. (I use an alias `wn=watch -n 1 nvidia-smi`)\r\n\r\nAnd additionally you can play with the buffer sizes, please have a look at the discussion here:\r\nhttps://huggingface.co/docs/transformers/main/main_classes/deepspeed#zero3-config\r\n\r\nI'm talking about tweaking these param:\r\n```\r\n \"stage3_max_live_parameters\": 1e9,\r\n \"stage3_max_reuse_distance\": 1e9,\r\n \"stage3_gather_16bit_weights_on_model_save\": true\r\n```\r\n\r\nAdditionally this is an expensive config for when you save the checkpoint as it has to reconstruct the model on one gpu, so you can turn it to `False`:\r\n\r\n```\r\n \"stage3_gather_16bit_weights_on_model_save\": true\r\n```\r\nand then use zero_to_fp32 to extrace the fp32 weights:\r\nhttps://huggingface.co/docs/transformers/main/main_classes/deepspeed#getting-the-model-weights-out\r\n\r\nThe config in the docs is sort of a generic one that fits many cases and requires a finetuning in some cases to fit a specific use case.\r\n\r\nI know this can appear complex, so please don't hesitate to ask questions 
and I'm sure we will figure out how to fit your model finetuning into 200GB CPU RAM and 96GB GPU RAM.\r\n\r\n",
"@stas00 So I ran it again, just as it was and this time mapped the gpu and the cpu memory. And you are right (again), the gpu sits at 93% free while the checkpoint is loaded completely in to cpu ram which when the checkpoint load starts is 76% used. Is it weird that this happens only in restarting from a checkpoint and not during training? Uninterrupted the model trains without ever exceeding the cpu memory max.\r\n\r\nHere's what I tried in response to your helpful comments:\r\n\r\nThe real answer was adding a 200gb swap file. That got it over the hump.\r\n\r\nChanging offload_param to \"none\" meant that 60% of the gpu was used. But it still maxed out cpu memory. \r\n \r\nI Changed the weights_on_model to false and I looked at the documentation on the stage3_max parameters, but it was unclear what was a significant change. I increased it to 2e9 and then 3e9, but it still maxed out the cpu memory. Seemed like with 3e9 on of the gpu's increased use slightly 60%->62%. Still not of that prevented getting to maxed out cpu memory.\r\n",
"I'm glad you found a workaround, @randywreed \r\n\r\nIt's odd that you see a different pattern during initial training vs. same but loaded from a checkpoint - perhaps a model is leaked somewhere - or may b e a bug in deepspeed where it allocates everything on CPU even when it shouldn't?\r\n\r\nLet me see if I can try to analyze the memory usage during different stages. At the very least we will have a map of what we should expect.",
"OK, I was able to investigate this some more and found the explanation for the problem you're experiencing.\r\n\r\ntldr; the problem is the overhead of `torch.load` for the optim states at resume which don't exist when you finetune the first time. the file is huge and thus requires a ton of additional CPU peak memory.\r\n\r\nThe full analysis:\r\n\r\nI'm going to offload only optimizer states, that is:\r\n\r\n```\r\n \"zero_optimization\": {\r\n \"stage\": 3,\r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\",\r\n \"pin_memory\": true\r\n },\r\n \"offload_param\": {\r\n \"device\": \"none\",\r\n \"pin_memory\": true\r\n },\r\n [...]\r\n```\r\n\r\nLet's take a smaller gpt2-large model so it's faster to run:\r\n\r\n```\r\n# 1. create checkpoint\r\n\r\ndeepspeed --num_gpus=1 examples/pytorch/language-modeling/run_clm.py \\\r\n--model_name_or_path gpt2-large --train_file tests/fixtures/sample_text.txt \\\r\n--do_train --fp16 --evaluation_strategy=steps --output_dir xxx \\\r\n--num_train_epochs 1 --eval_steps 1 --gradient_accumulation_steps 1 \\\r\n--per_device_train_batch_size 2 --use_fast_tokenizer False --learning_rate \\\r\n5e-06 --warmup_steps 10 --save_steps 1 --save_strategy steps --tokenizer_name \\\r\ngpt2 --max_train_samples 2 --max_eval_samples 2 --deepspeed \\\r\ntests/deepspeed/ds_config_zero3.json --skip_memory_metrics 0 \\\r\n--overwrite_output_dir\r\n\r\n\r\n\r\n***** train metrics *****\r\n before_init_mem_cpu = 3745MB\r\n before_init_mem_gpu = 1786MB\r\n epoch = 1.0\r\n init_mem_cpu_alloc_delta = 0MB\r\n init_mem_cpu_peaked_delta = 0MB\r\n init_mem_gpu_alloc_delta = 0MB\r\n init_mem_gpu_peaked_delta = 0MB\r\n train_loss = 3.793\r\n train_mem_cpu_alloc_delta = 17578MB\r\n train_mem_cpu_peaked_delta = 2952MB\r\n train_mem_gpu_alloc_delta = -148MB\r\n train_mem_gpu_peaked_delta = 6372MB\r\n train_runtime = 0:00:13.66\r\n train_samples = 1\r\n train_samples_per_second = 0.073\r\n train_steps_per_second = 0.073\r\n\r\n***** eval metrics *****\r\n epoch = 
1.0\r\n eval_accuracy = 0.2344\r\n eval_loss = 4.0664\r\n eval_mem_cpu_alloc_delta = 0MB\r\n eval_mem_cpu_peaked_delta = 0MB\r\n eval_mem_gpu_alloc_delta = 0MB\r\n eval_mem_gpu_peaked_delta = 1377MB\r\n eval_runtime = 0:00:01.58\r\n eval_samples = 1\r\n eval_samples_per_second = 0.632\r\n eval_steps_per_second = 0.632\r\n perplexity = 58.3469\r\n\r\n# 2. resume from checkpoint\r\n\r\ndeepspeed --num_gpus=1 examples/pytorch/language-modeling/run_clm.py \\\r\n--model_name_or_path gpt2-large --train_file tests/fixtures/sample_text.txt \\\r\n--do_train --fp16 --evaluation_strategy=steps --output_dir xxx \\\r\n--num_train_epochs 1 --eval_steps 1 --gradient_accumulation_steps 1 \\\r\n--per_device_train_batch_size 2 --use_fast_tokenizer False --learning_rate \\\r\n5e-06 --warmup_steps 10 --save_steps 1 --save_strategy steps --tokenizer_name \\\r\ngpt2 --max_train_samples 2 --max_eval_samples 2 --deepspeed \\\r\ntests/deepspeed/ds_config_zero3.json --skip_memory_metrics 0 \\\r\n--resume_from_checkpoint xxx/checkpoint-1\r\n\r\n\r\n***** train metrics *****\r\n before_init_mem_cpu = 4618MB\r\n before_init_mem_gpu = 1786MB\r\n epoch = 1.0\r\n init_mem_cpu_alloc_delta = 0MB\r\n init_mem_cpu_peaked_delta = 0MB\r\n init_mem_gpu_alloc_delta = 0MB\r\n init_mem_gpu_peaked_delta = 0MB\r\n train_loss = 0.0\r\n train_mem_cpu_alloc_delta = 16071MB\r\n train_mem_cpu_peaked_delta = 14762MB\r\n train_mem_gpu_alloc_delta = -148MB\r\n train_mem_gpu_peaked_delta = 0MB\r\n train_runtime = 0:00:00.00\r\n train_samples = 1\r\n train_samples_per_second = 135.318\r\n train_steps_per_second = 135.318\r\n\r\n***** eval metrics *****\r\n epoch = 1.0\r\n eval_accuracy = 0.2344\r\n eval_loss = 4.0664\r\n eval_mem_cpu_alloc_delta = 9MB\r\n eval_mem_cpu_peaked_delta = 0MB\r\n eval_mem_gpu_alloc_delta = -2MB\r\n eval_mem_gpu_peaked_delta = 146MB\r\n eval_runtime = 0:00:00.41\r\n eval_samples = 1\r\n eval_samples_per_second = 2.399\r\n eval_steps_per_second = 2.399\r\n perplexity = 
58.3469\r\n\r\n```\r\n\r\nAs you can see in `train_mem_cpu_alloc_delta+train_mem_cpu_peaked_delta` numbers - the resuming one took more than 10GB extra of CPU peak memory.\r\n\r\nI was able to reproduce your issue with GPT-J-6 on a somewhat similar setup, except using one 80GB gpu.\r\n\r\nAnd I forced max 100GB CPU and 50GB RAM with:\r\n\r\n```\r\nsystemd-run --user --scope -p MemoryHigh=100G -p MemoryMax=100G -p MemorySwapMax=50G bash\r\n```\r\n\r\nNow the same commands but with `EleutherAI/gpt-j-6B`:\r\n```\r\n\r\ndeepspeed --num_gpus=1 examples/pytorch/language-modeling/run_clm.py \\\r\n--model_name_or_path EleutherAI/gpt-j-6B --train_file \\\r\ntests/fixtures/sample_text.txt --do_train --fp16 --evaluation_strategy=steps \\\r\n--output_dir xxx --num_train_epochs 1 --eval_steps 1 \\\r\n--gradient_accumulation_steps 1 --per_device_train_batch_size 2 \\\r\n--use_fast_tokenizer False --learning_rate 5e-06 --warmup_steps 10 \\\r\n--save_steps 1 --save_strategy steps --tokenizer_name gpt2 --max_train_samples \\\r\n2 --max_eval_samples 2 --deepspeed tests/deepspeed/ds_config_zero3.json \\\r\n--skip_memory_metrics 0 --overwrite_output_dir\r\n\r\n***** train metrics *****\r\n before_init_mem_cpu = 3714MB\r\n before_init_mem_gpu = 12438MB\r\n epoch = 1.0\r\n init_mem_cpu_alloc_delta = 0MB\r\n init_mem_cpu_peaked_delta = 0MB\r\n init_mem_gpu_alloc_delta = 0MB\r\n init_mem_gpu_peaked_delta = 0MB\r\n train_loss = 2.6777\r\n train_mem_cpu_alloc_delta = 66213MB\r\n train_mem_cpu_peaked_delta = 32635MB\r\n train_mem_gpu_alloc_delta = 33MB\r\n train_mem_gpu_peaked_delta = 11879MB\r\n train_runtime = 0:07:44.12\r\n train_samples = 1\r\n train_samples_per_second = 0.002\r\n train_steps_per_second = 0.002\r\n\r\n***** eval metrics *****\r\n epoch = 1.0\r\n eval_accuracy = 0.4531\r\n eval_loss = 2.5234\r\n eval_mem_cpu_alloc_delta = 25MB\r\n eval_mem_cpu_peaked_delta = 0MB\r\n eval_mem_gpu_alloc_delta = 0MB\r\n eval_mem_gpu_peaked_delta = 2319MB\r\n eval_runtime = 0:00:00.58\r\n 
eval_samples = 1\r\n eval_samples_per_second = 1.708\r\n eval_steps_per_second = 1.708\r\n perplexity = 12.4714\r\n\r\nnote the huge size of the checkpoint it needs to load into cpu:\r\n\r\n$ ls -l xxx/checkpoint-1/global_step1/\r\ntotal 68G\r\n-rw-rw-r-- 1 stas stas 113M May 20 17:51 zero_pp_rank_0_mp_rank_00_model_states.pt\r\n-rw-rw-r-- 1 stas stas 68G May 20 17:56 zero_pp_rank_0_mp_rank_00_optim_states.pt\r\n\r\n\r\n# 2. resume from checkpoint\r\n\r\ndeepspeed --num_gpus=1 examples/pytorch/language-modeling/run_clm.py \\\r\n--model_name_or_path EleutherAI/gpt-j-6B --train_file \\\r\ntests/fixtures/sample_text.txt --do_train --fp16 --evaluation_strategy=steps \\\r\n--output_dir xxx --num_train_epochs 1 --eval_steps 1 \\\r\n--gradient_accumulation_steps 1 --per_device_train_batch_size 2 \\\r\n--use_fast_tokenizer False --learning_rate 5e-06 --warmup_steps 10 \\\r\n--save_steps 1 --save_strategy steps --tokenizer_name gpt2 --max_train_samples \\\r\n2 --max_eval_samples 2 --deepspeed tests/deepspeed/ds_config_zero3.json \\\r\n--skip_memory_metrics 0 --resume_from_checkpoint xxx/checkpoint-1\r\n\r\n\r\n[....]\r\n[INFO|deepspeed.py:449] 2022-05-20 17:58:56,691 >> Attempting to resume from xxx/checkpoint-1\r\n[2022-05-20 18:00:08,066] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 121940\r\n```\r\n\r\nThe problem is that `torch.save` saves one param at a time, but `torch.load` loads the whole thing to CPU memory at once. That's why the first finetuning works, but resuming doesn't.\r\n\r\n(edit: this is an incorrect statement as I show in the next comment)\r\n\r\nThe second stage needs some additional 70GB of CPU RAM.\r\n\r\nIn the past Deepspeed devs and Deepspeed users all used huge expensive DGX servers which had TBs of CPU RAM so nobody was worried about \"normal\" users with little CPU RAM. 
Things are shifting and slowly slowly the Deepspeed team is moving towards embracing the low resource stage.\r\n\r\nTunji and I are working on an universal checkpoint format where each param and optim states are saved as separate files and thus can be loaded on a tiny amount of CPU memory. Currently we are working on the Megatron-Deepspeed checkpoints since we need it for manipulating the 176B checkpoint, which is much bigger than 6B of GPT-J-6. If all goes well this work will eventually end up in normal ZeRO stages as well. The current `torch.load()` to cpu is simply not an option we can continue with.\r\n\r\nSo for now please use the swap memory workaround, it shouldn't impact anything other than making the startup a bit slower.\r\n\r\n--------------\r\n\r\nThe other issue this Issue has shined light to is not having much flexibility about how much is offloaded to CPU - same historical observation applies here. Currently one can only offload 12x or 14x params (8+4 for optim states and 2 for half precision params) and the GPU remains mainly empty, which is far from good utilization of resources. I have passed to Tunji a request to support more flexible offloading in the future. Let's see what comes out of it.\r\n\r\n--------------\r\n\r\nBoth issues are Deepspeed's core issues so there is not much we can do at the HF side to make thing better.\r\n\r\nIf you have any other questions and want me to explain anything please don't hesitate to ask. and if all is clear please feel free to close this issue.\r\n\r\nThank you for your patience, @randywreed \r\n\r\n\r\nAlso cc: @tjruwase for awareness.\r\n\r\n",
"I was digging some more into this and noticed that `torch.load` is symmetrical to `torch.save` when it comes to a model on gpu <=> disc if `map_location=\"cuda\"` is used - it doesn't copy it fully to CPU memory first - it will use CPU peak memory of the size of the largest entry in the state_dict. \r\n\r\nWe can see that empirically through the following test:\r\n\r\nHere is a 12GB checkpoint.\r\n\r\n```\r\n$ ls -l xxx/checkpoint-1/pytorch_model.bin\r\n-rw-rw-r-- 1 stas stas 12G May 20 17:51 xxx/checkpoint-1/pytorch_model.bin\r\n```\r\n\r\nLet's load it to gpu:\r\n```\r\n$ /usr/bin/time -f %M python -c 'import torch; _=torch.load\r\n(\"xxx/checkpoint-1/pytorch_model.bin\", map_location=\"cuda\")\r\n3279196\r\n```\r\n\r\nIt used only 3GB of CPU RAM in total. (Largest key)\r\n\r\nand of course, let's check the baseline of loading to cpu:\r\n\r\n```\r\n$ /usr/bin/time -f %M python -c 'import torch; _=torch.load\r\n(\"xxx/checkpoint-1/pytorch_model.bin\", map_location=\"cpu\")'\r\n12167976\r\n```\r\n\r\nIt used 12GB of CPU RAM in total as the size of the checkpoint.\r\n\r\n\r\nLooking deeper it appears that the issue is on the deepspeed side. It loads the checkpoint into cpu first:\r\n\r\nhttps://github.com/microsoft/DeepSpeed/blob/5208eb73da5269034ded69c4dd7c4bff81df81e7/deepspeed/runtime/engine.py#L2748\r\n\r\nand hence the huge additional peak memory usage.\r\n\r\nI filed an issue https://github.com/microsoft/DeepSpeed/issues/1971",
"Thanks for the help on this. I appreciate it."
] | 1,652
| 1,653
| 1,653
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: 2
- Using distributed or parallel set-up in script?: deepspeed
```
### Who can help?
@patil-suraj @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I was running run_clm.py on my own custom data with GPT-J-6B. I ran out of disk space and restarted from the latest checkpoint. Everything seemed to restart appropriately, and then the script crashed with return code -9 and no error message. Full log attached.
```
Using /home/ubuntu/.cache/torch_extensions/py38_cu111 as PyTorch extensions root...
No modifications detected for re-loaded extension module utils, skipping build step...
Loading extension module utils...
Time to load utils op: 0.00033783912658691406 seconds
[INFO|deepspeed.py:449] 2022-05-15 00:12:20,447 >> Attempting to resume from finetuned/checkpoint-22
[2022-05-15 00:12:51,292] [INFO] [engine.py:2754:_get_all_zero_checkpoint_state_dicts] successfully read 2 ZeRO state_dicts for rank 0
[2022-05-15 00:12:57,785] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 97277
[2022-05-15 00:12:57,786] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 97278
[2022-05-15 00:12:57,786] [ERROR] [launch.py:184:sigkill_handler] ['/usr/bin/python3', '-u', 'run_clm.py', '--local_rank=1', '--deepspeed', './Finetune_GPTNEO_GPTJ6B/finetuning_repo/ds_config_gptj6b.json', '--model_name_or_path', 'EleutherAI/gpt-j-6B', '--train_file', 'Jesus_sayings.txt', '--do_train', '--fp16', '--overwrite_cache', '--evaluation_strategy=steps', '--output_dir', 'finetuned', '--num_train_epochs', '5', '--eval_steps', '1', '--gradient_accumulation_steps', '32', '--per_device_train_batch_size', '1', '--use_fast_tokenizer', 'False', '--learning_rate', '5e-06', '--warmup_steps', '10', '--save_total_limit', '2', '--save_steps', '2', '--save_strategy', 'steps', '--tokenizer_name', 'gpt2'] exits with return code = -9
```
This is running on two 48gb GPUS using deepspeed. Was training without problem until crash and then on restart got the error.
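For context (my own interpretation, not stated in the logs): a return code of -9 means the subprocess was terminated by signal 9 (SIGKILL), which on Linux is most often the kernel OOM killer rather than a Python exception — plausible here, since ZeRO-3 with CPU offload places the optimizer and parameter states of a 6B-parameter model in host RAM during checkpoint resume. The sign convention can be verified in isolation:

```python
import signal
import subprocess

# subprocess reports death-by-signal as a negative return code:
# -N means the child process was killed by signal N.
proc = subprocess.run(["sh", "-c", "kill -9 $$"])
print(proc.returncode)   # -9 on Linux
print(-signal.SIGKILL)   # -9 on Linux
```

So `exits with return code = -9` in the launcher log is consistent with an external kill (OOM killer, `dmesg` would confirm), not a crash inside the training script.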
Original Command:
```
deepspeed --num_gpus=2 run_clm.py --deepspeed \
./Finetune_GPTNEO_GPTJ6B/finetuning_repo/ds_config_gptj6b.json \
--model_name_or_path EleutherAI/gpt-j-6B --train_file Jesus_sayings.txt \
--do_train --fp16 --overwrite_cache --evaluation_strategy=steps --output_dir \
finetuned --num_train_epochs 5 --eval_steps 1 --gradient_accumulation_steps 32 \
--per_device_train_batch_size 1 --use_fast_tokenizer False --learning_rate \
5e-06 --warmup_steps 10 --save_total_limit 2 --save_steps 2 --save_strategy \
steps --tokenizer_name gpt2
```
ds_config_gpt6b.json
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 12,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": false
},
"offload_param": {
"device": "cpu",
"pin_memory": false
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_fp16_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
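As a side note on the config: with the batch-size fields set to `"auto"`, the Trainer resolves them from the launch flags. A sketch of the arithmetic for the command above (my own back-of-envelope, not taken from the logs):

```python
# DeepSpeed requires that:
#   train_batch_size == train_micro_batch_size_per_gpu
#                       * gradient_accumulation_steps * world_size
per_device_train_batch_size = 1   # from --per_device_train_batch_size 1
gradient_accumulation_steps = 32  # from --gradient_accumulation_steps 32
world_size = 2                    # from --num_gpus=2

train_batch_size = per_device_train_batch_size * gradient_accumulation_steps * world_size
print(train_batch_size)  # 64
```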
[full_traceback_run_CLM_error.txt](https://github.com/huggingface/transformers/files/8694101/full_traceback_run_CLM_error.txt)
### Expected behavior
```shell
Should restart and continue training.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17258/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17257
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17257/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17257/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17257/events
|
https://github.com/huggingface/transformers/pull/17257
| 1,236,128,159
|
PR_kwDOCUB6oc4308md
| 17,257
|
Improve mismatched sizes management when loading a pretrained model
|
{
"login": "regisss",
"id": 15324346,
"node_id": "MDQ6VXNlcjE1MzI0MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regisss",
"html_url": "https://github.com/regisss",
"followers_url": "https://api.github.com/users/regisss/followers",
"following_url": "https://api.github.com/users/regisss/following{/other_user}",
"gists_url": "https://api.github.com/users/regisss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regisss/subscriptions",
"organizations_url": "https://api.github.com/users/regisss/orgs",
"repos_url": "https://api.github.com/users/regisss/repos",
"events_url": "https://api.github.com/users/regisss/events{/privacy}",
"received_events_url": "https://api.github.com/users/regisss/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I see that all tests corresponding to mismatched sizes failed. Looking at [test_modeling_common.py](https://github.com/huggingface/transformers/blob/ee393c009a243bbb86fa11d2efa771a1704d85cf/tests/test_modeling_common.py#L2191), I guess the current behaviour is expected.\r\n\r\nShall I close this PR and open a new issue regarding the fact that some models will fail in examples (such as Swin for image classification as explained in the first message)?",
"I'm not sure why you would open a new issue: this is not a bug and there is an option to load a model with a pretrained head that has different shapes from the checkpoint. Or do you mean the examples should have a flag to activate that option? ",
"> I'm not sure why you would open a new issue: this is not a bug and there is an option to load a model with a pretrained head that has different shapes from the checkpoint. Or do you mean the examples should have a flag to activate that option?\r\n\r\nAfter using it a bit more I realized it's not a bug indeed, sorry for the confused wording. The problem is that just changing the model used in the example may break it and I couldn't fine anywhere in the doc or in the READMEs how to solve this, I had to take a look at the code. I suggest one of the following to make it more user-friendly:\r\n- the error message suggests to add the argument `ignore_mismatched_sizes=True` to `AutoModelForXXX.from_pretrained`\r\n- adding a flag to activate that option as you propose, with a mention in the README\r\n- changing the default value of `ignore_mismatched_sizes` to `True` since a warning is displayed when sizes are different, but I guess I'm lacking of context here and I'm just considering this example use case\r\n\r\nAgain I'm certainly lacking of context here but I would be happy to modify this PR so that it makes using a different model in examples less tedious when the pretrained head has different dimensions :)",
"> * the error message suggests to add the argument `ignore_mismatched_sizes=True` to `AutoModelForXXX.from_pretrained`\r\n\r\nThis is definitely something we can add and would help the user!\r\n\r\n> * adding a flag to activate that option as you propose, with a mention in the README\r\n\r\nYes, another welcome improvement!\r\n\r\n> * changing the default value of `ignore_mismatched_sizes` to `True` since a warning is displayed when sizes are different, but I guess I'm lacking of context here and I'm just considering this example use case\r\n\r\nThis can't be done for backward compatibility reasons. In further work in `from_pretrained`, we might have a default that does this, but only for the head of the model. It's dangerous to have it enabled by default on the whole body.",
"Great, I'm going to modify this PR accordingly!",
"I just modified the *classification* examples because I'm not sure about the types of head used in other scenarios.\r\n\r\nAlso, I noticed that VSCode automatically trimmed extra whitespaces (because I configured it this way). Let me know if this is an issue and I'll revert that."
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Examples currently fail when the loaded model head has different dimensions from the expected ones. For instance, in the image classification example, if a pretrained classification head has different dimensions from the classification head to fine-tune, the current implementation will lead to this error:
```
Traceback (most recent call last):
File "run_image_classification.py", line 377, in <module>
main()
File "run_image_classification.py", line 267, in main
model = AutoModelForImageClassification.from_pretrained(
File "/home/regis/HuggingFace/dev/transformers/venv/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/home/regis/HuggingFace/dev/transformers/venv/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2067, in from_pretrained
model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_pretrained_model(
File "/home/regis/HuggingFace/dev/transformers/venv/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2276, in _load_pretrained_model
raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")
RuntimeError: Error(s) in loading state_dict for SwinForImageClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([1000, 1024]) from checkpoint, the shape in current model is torch.Size([3, 1024]).
size mismatch for classifier.bias: copying a param with shape torch.Size([1000]) from checkpoint, the shape in current model is torch.Size([3]).
```
To reproduce this, you can run the image classification example with Swin, such as:
```
python run_image_classification.py \
--dataset_name beans \
--output_dir /tmp/beans_outputs/ \
--remove_unused_columns False \
--do_train \
--per_device_train_batch_size 8 \
--model_name_or_path microsoft/swin-base-patch4-window7-224
```
The solution is to add the argument `ignore_mismatched_sizes=True` to the `AutoModelForXXX.from_pretrained` method. Thus, this PR does the following:
- expands the error message to suggest that solution when the error is raised
- adds an `--ignore_mismatched_sizes` flag to all classification examples, so the dimensions of the classification head can be adapted when they differ from the expected ones
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17257/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17257/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17257",
"html_url": "https://github.com/huggingface/transformers/pull/17257",
"diff_url": "https://github.com/huggingface/transformers/pull/17257.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17257.patch",
"merged_at": 1652803095000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17256
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17256/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17256/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17256/events
|
https://github.com/huggingface/transformers/issues/17256
| 1,236,104,823
|
I_kwDOCUB6oc5JrXZ3
| 17,256
|
RAG - ValueError: Columns ['embeddings'] not in the dataset. Current columns in the dataset: ['title', 'text']
|
{
"login": "deema-A",
"id": 60605574,
"node_id": "MDQ6VXNlcjYwNjA1NTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/60605574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deema-A",
"html_url": "https://github.com/deema-A",
"followers_url": "https://api.github.com/users/deema-A/followers",
"following_url": "https://api.github.com/users/deema-A/following{/other_user}",
"gists_url": "https://api.github.com/users/deema-A/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deema-A/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deema-A/subscriptions",
"organizations_url": "https://api.github.com/users/deema-A/orgs",
"repos_url": "https://api.github.com/users/deema-A/repos",
"events_url": "https://api.github.com/users/deema-A/events{/privacy}",
"received_events_url": "https://api.github.com/users/deema-A/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi ! The \"embeddings\" column is computed at the line:\r\n\r\nhttps://github.com/huggingface/transformers/blob/95b6bef624bd9dfdfcdfdedd86bb2173f7fb4bfe/examples/research_projects/rag/use_own_knowledge_dataset.py#L88-L93\r\n\r\nCan you make sure this line is run and check that \"embeddings\" is in `dataset.column_names` ?",
"@deema-A,\r\n\r\nAlso note that we don't officially maintain code under `research_projects`",
"The csv files you are creating are not in the format expected by the code. \r\n\r\nThis is the line in the code that reads the csv file:\r\n` \r\n dataset = load_dataset(\r\n \"csv\", data_files=[rag_example_args.csv_path], split=\"train\", delimiter=\"\\t\", column_names=[\"title\", \"text\"]\r\n )`\r\n\r\nThere is the delimiter=\"\\t\"\r\nYou can use this to create the csv:\r\n\r\n`\r\nimport csv\r\nrow_list = [\r\n [\"title\", \"text\"],]\r\n\r\nwith open('my_knowledge_dataset.csv', 'w', newline='') as file:\r\n writer = csv.writer(file, delimiter='\\t')\r\n writer.writerows(row_list) `",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,652
| 1,659
| 1,659
|
NONE
| null |
### System Info
```shell
Hi,
INFO:__main__:Step 1 - Create the dataset
WARNING:datasets.builder:Using custom data configuration default-3b4ec65e3c3d818f
Downloading and preparing dataset csv/default to /local/data/daa2182/.cache/huggingface/modules/datasets_modules/datasets/csv/default-3b4ec65e3c3d818f/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519...
Downloading data files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:00<00:00, 5322.72it/s]
Extracting data files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:00<00:00, 915.59it/s]
Dataset csv downloaded and prepared to /local/data/daa2182/.cache/huggingface/modules/datasets_modules/datasets/csv/default-3b4ec65e3c3d818f/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data.
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:00<00:00, 1262.20ba/s]
Some weights of the model checkpoint at facebook/dpr-ctx_encoder-multiset-base were not used when initializing DPRContextEncoder: ['ctx_encoder.bert_model.pooler.dense.weight', 'ctx_encoder.bert_model.pooler.dense.bias']
- This IS expected if you are initializing DPRContextEncoder from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DPRContextEncoder from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
/local/data/daa2182/anaconda/lib/python3.9/site-packages/torch/cuda/__init__.py:145: UserWarning:
NVIDIA RTX A4000 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA RTX A4000 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'DPRQuestionEncoderTokenizer'.
The class this function is called from is 'DPRContextEncoderTokenizerFast'.
INFO:__main__:Step 2 - Index the dataset
Traceback (most recent call last):
File "/local/data/daa2182/13MAy/transformers/examples/research_projects/rag/use_own_knowledge_dataset.py", line 209, in <module>
main(rag_example_args, processing_args, index_hnsw_args)
File "/local/data/daa2182/13MAy/transformers/examples/research_projects/rag/use_own_knowledge_dataset.py", line 107, in main
dataset.add_faiss_index("embeddings", custom_index=index)
File "/local/data/daa2182/anaconda/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 4197, in add_faiss_index
with self.formatted_as(type="numpy", columns=[column], dtype=dtype):
File "/local/data/daa2182/anaconda/lib/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/local/data/daa2182/anaconda/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1809, in formatted_as
self.set_format(type, columns, output_all_columns, **format_kwargs)
File "/local/data/daa2182/anaconda/lib/python3.9/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/local/data/daa2182/anaconda/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1868, in set_format
raise ValueError(
ValueError: Columns ['embeddings'] not in the dataset. Current columns in the dataset: ['title', 'text']
I got this message every time I created a new CSV.
@patrickvonplaten
@lhoestq
thanx!
```
### Who can help?
@patrickvonplaten
@lhoestq
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
python examples/research_projects/rag/use_own_knowledge_dataset.py \
--csv_path path/to/my_csv \
--output_dir path/to/my_knowledge_dataset \
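For anyone hitting this: the script loads the file with `delimiter="\t"` and `column_names=["title", "text"]`, so the input must be tab-separated with exactly those two columns. A minimal sketch of producing such a file (the rows and file name are illustrative):

```python
import csv

# Illustrative rows; a real knowledge base would contain your own passages.
rows = [
    ("Aaron", "Aaron is a prophet, high priest, and the brother of Moses."),
    ("Pokemon", "The Pokemon franchise revolves around fictional species."),
]

with open("my_knowledge_dataset.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f, delimiter="\t").writerows(rows)

# Sanity check: parse it back the way the script's load_dataset call would.
with open("my_knowledge_dataset.csv", newline="", encoding="utf-8") as f:
    parsed = list(csv.reader(f, delimiter="\t"))
print(parsed[0][0])  # Aaron
```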
### Expected behavior
```shell
It should create a new KB
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17256/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17256/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17255
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17255/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17255/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17255/events
|
https://github.com/huggingface/transformers/pull/17255
| 1,236,100,688
|
PR_kwDOCUB6oc4303he
| 17,255
|
Added es version of bertology.mdx doc
|
{
"login": "jQuinRivero",
"id": 55513213,
"node_id": "MDQ6VXNlcjU1NTEzMjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/55513213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jQuinRivero",
"html_url": "https://github.com/jQuinRivero",
"followers_url": "https://api.github.com/users/jQuinRivero/followers",
"following_url": "https://api.github.com/users/jQuinRivero/following{/other_user}",
"gists_url": "https://api.github.com/users/jQuinRivero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jQuinRivero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jQuinRivero/subscriptions",
"organizations_url": "https://api.github.com/users/jQuinRivero/orgs",
"repos_url": "https://api.github.com/users/jQuinRivero/repos",
"events_url": "https://api.github.com/users/jQuinRivero/events{/privacy}",
"received_events_url": "https://api.github.com/users/jQuinRivero/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Muchas gracias @jQuinRivero for the PR! ๐ค Please let me know if you wish to translate another one. \r\n\r\n@sgugger LGTM :)"
] | 1,652
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
Fixes #15947
Added Spanish version of the language_modeling.mdx documentation file.
@omarespejel @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17255/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17255",
"html_url": "https://github.com/huggingface/transformers/pull/17255",
"diff_url": "https://github.com/huggingface/transformers/pull/17255.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17255.patch",
"merged_at": 1653518813000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17254
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17254/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17254/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17254/events
|
https://github.com/huggingface/transformers/pull/17254
| 1,236,026,330
|
PR_kwDOCUB6oc430qjD
| 17,254
|
Add fast tokenizer for BARTpho
|
{
"login": "datquocnguyen",
"id": 2412555,
"node_id": "MDQ6VXNlcjI0MTI1NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2412555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/datquocnguyen",
"html_url": "https://github.com/datquocnguyen",
"followers_url": "https://api.github.com/users/datquocnguyen/followers",
"following_url": "https://api.github.com/users/datquocnguyen/following{/other_user}",
"gists_url": "https://api.github.com/users/datquocnguyen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/datquocnguyen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/datquocnguyen/subscriptions",
"organizations_url": "https://api.github.com/users/datquocnguyen/orgs",
"repos_url": "https://api.github.com/users/datquocnguyen/repos",
"events_url": "https://api.github.com/users/datquocnguyen/events{/privacy}",
"received_events_url": "https://api.github.com/users/datquocnguyen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17254). All of your documentation changes will be reflected on that endpoint.",
"Following: [https://github.com/huggingface/transformers/pull/13788](https://github.com/huggingface/transformers/pull/13788)\r\nI now add a \"fast\" version of the BartphoTokenizer. \r\n@sgugger , @LysandreJik, @patil-suraj , @SaulLu and @patrickvonplaten Please could you have a look and provide your feedback? Thanks.",
"Hi @patil-suraj and @sgugger I revised the slow and fast BartphoTokenizer variants to satisfy your requirements.\r\nPlease have a look and give feedback. Thanks. \r\ncc: @SaulLu @LysandreJik ",
"Please note that the unsuccessful checks are due to the failed `test_modeling_wav2vec2_conformer.py`, not related to our BartphoTokenizer. @SaulLu \r\n",
"> Please note that the unsuccessful checks are due to the failed `test_modeling_wav2vec2_conformer.py`, not related to our BartphoTokenizer. @SaulLu\r\n\r\n@SaulLu fixed the wav2vec2_conformer tests on master",
"@datquocnguyen We can't merge anything that has any breaking change on the existing tokenizer, as I said before.",
"@sgugger Ah, I now see your point. I initially thought the code would be much nicer if I also push a new version of the slow tokenizer. But then it breaks the existing code. \r\n\r\nAnyway, the fast tokenizer would totally work without changing the original code of the slow tokenizer (as I already developed the fast_tokenizer_file), I think. I would need a bit of time to roll back the slow tokenizer to its original version. \r\n \r\n(cc @SaulLu , @LysandreJik , @patil-suraj and @patrickvonplaten )\r\n",
"Hi @SaulLu , @sgugger , @patil-suraj @LysandreJik and @patrickvonplaten\r\n\r\nIn addition to a fast BARTpho tokenizer, I also revised my code to add fast tokenizers for BERTweet and PhoBERT. Here, changes now do not break existing slow tokenizers. My hacking trick to have the same tokenization strategy for both slow and fast variants is already mentioned [here](https://github.com/huggingface/transformers/pull/17254#discussion_r878687089).\r\n\r\nPlease have a look and provide feedback. Thanks!\r\n\r\nNote that I have no idea to fix the failed test `check_code_quality` w.r.t. `black`:\r\n\r\n```\r\n#!/bin/bash -eo pipefail\r\nblack --check --preview examples tests src utils\r\nSkipping .ipynb files as Jupyter dependencies are not installed.\r\nYou can fix this by running ``pip install black[jupyter]``\r\nwould reformat src/transformers/models/bartpho/tokenization_bartpho_fast.py\r\n\r\nOh no! ๐ฅ ๐ ๐ฅ\r\n1 file would be reformatted, 1594 files would be left unchanged.\r\n\r\nExited with code exit status 1\r\n```\r\n\r\nHowever, the target file \"tokenization_bartpho_fast.py\" is left unchanged in my local machine:\r\n\r\n<img width=\"938\" alt=\"Screen Shot 2022-05-22 at 11 58 19 pm\" src=\"https://user-images.githubusercontent.com/2412555/169706748-f34f1034-93f3-48c4-937d-4126bb119d7c.png\">\r\n\r\nI think there might be an inconsistency with `black` used in my local machine and in your CI, so I could not fix it from my side. It would be great if you guys could help fix it. Thanks a lot. \r\n",
"@SaulLu Thank you very much for your detailed feedback and suggestion. Before moving forward to revise the code w.r.t. the `add_tokens` feature, it would be great if you could provide some more context/clarification on the intention of using `add_tokens`.\r\n\r\nVietnamese can be considered as an isolated language, where the (monolingual) Vietnamese lexicon of syllables contains about 8K syllable types. Using a monolingual vocab of 40K types in `vinai/bartpho-syllable` is far more than enough to cover all possible cases of Vietnamese syllables. I am currently not sure whether the `add_tokens` feature is needed in our tokenizer/model when using our tokenizer/model on Vietnamese data? ",
"@SaulLu Similarly, for monolingual models PhoBERT for Vietnamese and BERTweet for English, vocabularies of 64K subword types should be more than enough, so that we might not need to use the `add_tokens` feature, right? ",
"Hi @datquocnguyen. It's amazing that you added those two new fast tokenizers. However we need PRs to be focused on one thing. Would you terribly mind splitting it in three (one for BARTpho, one for PhoBERT and one for BERTweet)?\r\n\r\nThanks a lot!",
"> @SaulLu Thank you very much for your detailed feedback and suggestion. Before moving forward to revise the code w.r.t. the add_tokens feature, it would be great if you could provide some more context/clarification on the intention of using add_tokens.\r\n\r\n@datquocnguyen I think there are many, many use cases for `add_tokens`. But for example, we can imagine a use case where a user would like to fine-tune the model on a task that needs to identify specific tokens: like for example `\"<QUESTION>\"` and `\"<ANSWER>\"`. This method is convenient because it is unified across all tokenizers. ",
"@SaulLu Thank you very much for your feedback. \r\n\r\nI improved the hacking strategy to handle the issue with newly added tokens. \r\n\r\nAssume that the sizes of the multilingual and monolingual vocabularies are X and Y, respectively (here, X > Y, X is the `base_vocab_size` and Y is set at `mask_token_id` in our hacking strategy). Added tokens A1, A2, A3, ... would have original ids of X, X+1, X+2,... that will be mapped into new ids Y, Y+1, Y+2,..., respectively.\r\n\r\nI extended the original function `get_added_vocab` into `get_added_vocab_hacking` to extract a dictionary `added_vocab ` {A1: Y, A2: Y+1, A3: Y+2, ...} and another dictionary `id_mapping` of id mapping {X: Y, X+1: Y+1, X+2: Y+2, ...}\r\n\r\n```python\r\n def get_added_vocab_hacking(self):\r\n \"\"\"\r\n Returns the added tokens in the vocabulary as a dictionary of token to index.\r\n Returns:\r\n `Dict[str, int], Dict[int, int]`: The added tokens, and their original and new ids\r\n \"\"\"\r\n base_vocab_size = self._tokenizer.get_vocab_size(with_added_tokens=False)\r\n full_vocab_size = self._tokenizer.get_vocab_size(with_added_tokens=True)\r\n if full_vocab_size == base_vocab_size:\r\n return {}, {}\r\n\r\n # Tokens in added_vocab should have ids that are equal to or larger than the size of base_vocab\r\n added_vocab = dict(\r\n (self._tokenizer.id_to_token(index), index + 1 - base_vocab_size + self.mask_token_id)\r\n for index in range(base_vocab_size, full_vocab_size)\r\n )\r\n\r\n id_mapping = dict((index, self._tokenizer.token_to_id(tok)) for tok, index in added_vocab.items())\r\n\r\n return added_vocab, id_mapping\r\n```\r\n\r\nSo in tokenization, the previous strategy maps all ids larger than `mask_token_id` to `unk_token_id` now is revised to also handle added tokens [as follows](https://github.com/datquocnguyen/transformers/blob/f59b4afeb1af6551feac5d3214bbdf582ebbb098/src/transformers/models/bartpho/tokenization_bartpho_fast.py#L234-L242):\r\n\r\n\r\n```python\r\n\r\n ids = []\r\n for 
(id, token) in zip(e.ids, e.tokens):\r\n if id <= self.mask_token_id:\r\n ids.append(id)\r\n else:\r\n if token.strip() in added_vocab: # handle added tokens\r\n ids.append(added_vocab[token.strip()])\r\n else:\r\n ids.append(self.unk_token_id)\r\n```\r\n\r\nIn addition, [a preprocess of mapping ids](https://github.com/datquocnguyen/transformers/blob/f59b4afeb1af6551feac5d3214bbdf582ebbb098/src/transformers/models/bartpho/tokenization_bartpho_fast.py#L174-L197) Y, Y+1, Y+2, ... into X, X+1, X+2 is applied before decoding:\r\n\r\n```python\r\n def _decode(\r\n self,\r\n token_ids: Union[int, List[int]],\r\n skip_special_tokens: bool = False,\r\n clean_up_tokenization_spaces: bool = True,\r\n **kwargs\r\n ) -> str:\r\n self._decode_use_source_tokenizer = kwargs.pop(\"use_source_tokenizer\", False)\r\n\r\n\r\n if isinstance(token_ids, int):\r\n token_ids = [token_ids]\r\n\r\n\r\n # Mapping added tokens' ids into their original values\r\n _, id_mapping = self.get_added_vocab_hacking()\r\n if len(id_mapping) > 0:\r\n token_ids = [id_mapping[id] if id in id_mapping else id for id in token_ids]\r\n\r\n\r\n text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)\r\n\r\n\r\n if clean_up_tokenization_spaces:\r\n clean_text = self.clean_up_tokenization(text)\r\n return clean_text\r\n else:\r\n return text\r\n```\r\n\r\nWith this improved strategy, there are two tests needed to override:\r\n\r\n```python\r\n def test_tokenizer_fast_store_full_signature(self):\r\n \"\"\"\r\n Override the original test as BartphoTokenizer requires a monolingual_vocab_file rather than a merges_file\r\n \"\"\"\r\n```\r\n\r\n```python\r\ndef test_add_tokens_tokenizer(self):\r\n \"\"\"\r\n Override the original test as in the fast tokenizer, the actual vocab_size is in fact mask_token_id + 1\r\n \"\"\"\r\n```\r\n\r\n\r\n",
"> Hi @datquocnguyen. It's amazing that you added those two new fast tokenizers. However we need PRs to be focused on one thing. Would you terribly mind splitting it in three (one for BARTpho, one for PhoBERT and one for BERTweet)?\r\n> \r\n> Thanks a lot!\r\n\r\n@sgugger I changed the code, so that this PR is only for BARTpho. cc: @SaulLu ",
"@SaulLu please help to review [the improved strategy](https://github.com/huggingface/transformers/pull/17254#issuecomment-1139492485) and give feedback. Thank you very much.\r\n\r\nPlease note that failed checks are not related to my bartpho tokenizer, except for one check using `black`, however `black` was successful in my local computer, as detailed at https://github.com/huggingface/transformers/pull/17254#issuecomment-1133932067. Could you provide information about `black` used in your CI?, so I can replicate the issue on my local computer, then fix it. Thanks. \r\n\r\ncc: @sgugger \r\n\r\n```\r\n#!/bin/bash -eo pipefail\r\nblack --check --preview examples tests src utils\r\nSkipping .ipynb files as Jupyter dependencies are not installed.\r\nYou can fix this by running ``pip install black[jupyter]``\r\nwould reformat src/transformers/models/bartpho/tokenization_bartpho_fast.py\r\n\r\nOh no! ๐ฅ ๐ ๐ฅ\r\n1 file would be reformatted, 1594 files would be left unchanged.\r\n\r\nExited with code exit status 1\r\n\r\n```\r\n",
"You need to install `black==22.3` to have the same results as the CI.",
"> You need to install `black==22.3` to have the same results as the CI.\r\n\r\n@sgugger You might miss my https://github.com/huggingface/transformers/pull/17254#issuecomment-1133932067 I already had `black` version 22.3 as detailed in https://github.com/huggingface/transformers/pull/17254#issuecomment-1133932067.",
"@sgugger Following https://github.com/huggingface/transformers/pull/17254#issuecomment-1133932067, I was not aware that I have to include `--preview` into my command `black -l 119 <py_file_path>`. The code quality check is passed. \r\n\r\nThere are now 4 failed checks not caused by BartphoTokenizerFast, I believe:\r\n\r\n- FAILED tests/models/layoutlmv2/test_tokenization_layoutlmv2.py::LayoutLMv2TokenizationTest::test_saving_tokenizer_trainer\r\n====== 1 failed, 135 passed, 32 skipped, 20 warnings in 142.09s (0:02:22) ======\r\n- FAILED tests/pipelines/test_pipelines_summarization.py::SummarizationPipelineTests::test_small_model_pt\r\n- `run_tests_flax` = 804 failed, 5364 passed, 11260 skipped, 7960 warnings in 1306.16s (0:21:46) ==\r\n- `run_tests_torch` = 823 failed, 10738 passed, 6752 skipped, 5425 warnings in 1554.96s (0:25:54) ==\r\n\r\ncc: @SaulLu , @LysandreJik, @patil-suraj and @patrickvonplaten It would be great if you guys can also help review this PR. Thanks a lot.",
"You will need to rebase on the main branch to fix the test failures. It's due to the botched release of Protobuf that breaks everything (the main branch has it pinned).",
"@sgugger I rebased the main branch with the latest commits from `transformers`. \r\n\r\nThere are 3 failed checks not relevant to the BartphoTokenizer:\r\n\r\n- `Build PR Documentation / build / build_pr_documentation`: urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))\r\n- `run_tests_tf`: FAILED tests/models/mobilebert/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_resize_token_embeddings\r\n- `run_tests_torch`: FAILED tests/models/glpn/test_feature_extraction_glpn.py::GLPNFeatureExtractionTest::test_call_pytorch\r\n\r\nfyi, @SaulLu , @LysandreJik, @patil-suraj and @patrickvonplaten ",
"Hey @datquocnguyen, thanks a lot for your PR and for working hard on this! I think this is one situation where the code on the hub (detailed below) would fit really well, for the following reasons:\r\n\r\n- The tokenizer code that is defined seems to work with the vocabulary you have worked with so far, but is unlikely to work with other vocabularies. At least it won't be the case unless the approach you have taken to generate that vocabulary is very documented, as it is very complex when compared to other tokenizers.\r\n- If I understand correctly, the approach taken could have been handled by a single vocabulary rather than 2. I definitely understand why doing it like this for BARTpho makes sense, but this is unlikely to be the case for other checkpoints leveraging this architecture.\r\n- The code in `transformers` has to be maintained for years, so we want to optimize for heavily tested code; the methods that you add, while definitely useful in order to get the BARTpho tokenization right, are not tested and have `hacking` in their name, which shows that they're targetting something a bit different than what we aim to solve with `transformers`' internal code (but that definitely has its place on the hub!).\r\n\r\nFinally, you're moving very fast with your implementations, which is great. However, given the backwards-compatibility approach we have chosen and the fact that we want production-ready code means that we'll be slowing things down in this case, unfortunately.\r\n\r\n---\r\n\r\nThe code on the hub is explained [here](https://huggingface.co/docs/transformers/custom_models). It's a way to share models and configurations by sharing their modeling code directly on the hub. When doing `from_pretrained`, you can then fetch the code on the hub. BARTpho is exactly the kind of use-cases we had in mind when working on this feature - we just didn't get to implementing the tokenizer code yet! 
I think we should work to enable this ASAP and have BARTpho be a first trial.\r\n\r\nThis would enable you to move as fast as you need, while providing the same functionality to downstream `transformers` users, and will allow you to manage your repositories as you see fit. Would that work for you?",
"[@LysandreJik](https://github.com/LysandreJik) Thanks for your detailed feedback.\r\nBefore I go to answer whether the code on the hub would work for me.\r\nI am just concerning your first comment:\r\n\r\n> The tokenizer code that is defined seems to work with the vocabulary you have worked with so far, but is unlikely to work with other vocabularies. At least it won't be the case unless the approach you have taken to generate that vocabulary is very documented, as it is very complex when compared to other tokenizers.\r\n\r\nSo I would try to respond to this first comment. As detailed in [#13788 (comment)](https://github.com/huggingface/transformers/pull/13788#issuecomment-931908671), regarding the use case of BartphoTokenizer: Other languages can thus simply reuse BartphoTokenizer with their `monolingual_vocab_file`. The goal is to reduce the model sizes of existing pre-trained XLM-RoBERTa/mBART models when applying to a smaller set of languages instead of the whole 50/100 languages. Here, you would trim XLM-RoBERTa/mBART to just dealing with subwords in the `monolingual_vocab_file` while not requiring retraining the corresponding multilingual sentencepiece model. \r\n\r\nThe generation process of BARTpho vocabulary is not that very complicated, as detailed in [#13788 (comment)](https://github.com/huggingface/transformers/pull/13788#issuecomment-931908671). In particular, I apply a pre-trained/existing sentencepiece tokenization model from a pre-trained language model (e.g., XLM-RoBERTa/mBART/...) to segment sentences in a language/task-specific corpus, and then selected just top X (e.g. 40K) subwords to be included in a specific vocabulary for my downstream language/task (here, I named this specific vocabulary as `monolingual_vocab_file`). The existing sentencepiece model as well as the specific vocabulary are both required for a proper tokenization. 
\r\n\r\nRegarding BartphoTokenizerFast, the process of generating the `tokenizer_file` is that: (1) I load the slow BartphoTokenizer, (2) call the function `convert_slow_tokenizer` to convert it into a fast variant, and (3) then save the fast one. This might be a bit complicated for others as it is not well-documented, but I could simply abandon the use of `tokenizer_file` in BartphoTokenizerFast. Thus BartphoTokenizerFast would just create and convert a slow tokenizer BartphoTokenizer to build the backend.\r\n\r\nI believe there are many use cases in which BartphoTokenizer/BartphoTokenizerFast would fit.\r\n\r\n[@SaulLu](https://github.com/SaulLu) As you have been playing around with BartphoTokenizer, is there any comment from your side regarding [@LysandreJik](https://github.com/LysandreJik)' first point. Thank you both.",
"@LysandreJik \r\n> If I understand correctly, the approach taken could have been handled by a single vocabulary rather than 2.\r\n\r\nI am not sure this is the case.\r\nThe pre-trained (multilingual) sentencepiece model and the specific monolingual_vocab_file are both required for proper tokenization: the multilingual sentencepiece model is used for subword tokenization while all subwords that do not appear in the monolingual_vocab_file are converted into an unknown token. ",
"@LysandreJik I did dig into the code on the hub, and am wondering whether I understand your approach correctly:\r\n\r\n- Instead of merging `tokenization_bartpho_fast.py` into the main `transformers` branch, we now just need to upload/push it to `https://huggingface.co/vinai/bartpho-syllable/tree/main`. \r\n\r\n- There would be an upcoming feature of `sharing a custom tokenizer`, which I should register BartphoTokenizerFast from `vinai/bartpho-syllable` or `https://huggingface.co/vinai/bartpho-syllable/blob/main/tokenization_bartpho_fast.py`. Then it would allow users to automatically download or import `tokenization_bartpho_fast.py` and use BartphoTokenizerFast via AutoTokenizer with existing features in the main `transformers` branch. \r\n\r\nSo what I should do is to wait until you guys complete that `sharing a custom tokenizer` feature and then I would just need to have some piece of code for registering BartphoTokenizerFast with `register_for_auto_class('AutoTokenizer')` and it would run as the same as merged into the main `transformers` branch, wouldn't it? \r\n\r\nThanks.\r\n\r\ncc: @SaulLu ",
"For a wider context where many subwords appearing in the \"merges\" file do not appear in the \"vocab\" file as in CTRL, FlauBERT, PhoBERT and BERTweet and the like (i.e. slow tokenizers would convert those subwords into unkn_id during encoding), it is likely impossible to develop a fast tokenizer variant using documented approaches while keeping the same tokenization strategy.\r\n\r\nThus, the trick used in BartphoTokenizerFast would come into play, and help solve this issue. If merged, it is then straightforward to develop similar fast tokenizers for CTRL, FlauBERT, PhoBERT and BERTweet.\r\n\r\nIt would be great if @LysandreJik @SaulLu @patrickvonplaten or @sgugger could provide concrete feedback on whether this PR will have a chance to be merged. If this PR could not be merged, then what is the status of the \"sharing a custom tokenizer on the hub\" feature (e.g. tentative date for releasing this feature) ?\r\n\r\nThank you very much.",
"Hi @datquocnguyen ,\r\nI echo [Lysandre's answer](https://github.com/huggingface/transformers/pull/17254#issuecomment-1143221043): I thank you for working very hard for this PR :hugs: and I also think it would be a very good fit for the feature on the hub. And this addition will be really useful for the community!\r\n\r\n> It would run as the same as merged into the main transformers branch, wouldn't it?\r\n\r\nYes, the idea is that it would be (almost) identical to what you have with transformers! I don't know when it will be released (as I'm not directly working on it), but it seems to be a high-priority feature!\r\n\r\n> For a wider context where many subwords appearing in the \"merges\" file do not appear in the \"vocab\" file as in CTRL, FlauBERT, PhoBERT and BERTweet and the like (i.e. slow tokenizers would convert those subwords into unkn_id during encoding), it is likely impossible to develop a fast tokenizer variant using documented approaches while keeping the same tokenization strategy.\r\n\r\nIndeed, you raise a very good point. I have also observed that there are tokens listed in the merge rules that do not appear in the vocabulary for `FlauBERT` - and I believe you that this is also the case for `CTRL`, `PhoBERT` and `BERTweet`. Nevertheless, from my point of view, looking at `FlauBERT`'s code, the fix that seems to me the most suitable for our API (tokenizer slow โ converter โ tokenizer fast ) would be to clean up the merge file during the conversion step. This technique would indeed avoid having to modify the tokenizer fast core method(s). I've attached a snippet below that illustrates this idea. Am I missing something by thinking that this would achieve the desired final behaviour? 
\r\n\r\n------------\r\n_Snippet to illustrate the merges file \"clean up\" that I have in mind_\r\n\r\nTo test this snippet, we need to retrieved locally the `vocab.json` and `merges.txt` files of `FlauBERT`, for example by doing `git clone https://huggingface.co/flaubert/flaubert_base_cased`.\r\n\r\nThen, if we try to test to initialize a tokenizer fast (pure without the transformers's tokenizer wrapper for the moment), we observe that it raises an error\r\n```python\r\nfrom tokenizers import Tokenizer\r\nfrom tokenizers.models import BPE\r\nimport json\r\n\r\nfile_vocab = f\"flaubert_base_cased/vocab.json\"\r\nfile_merges = f\"content/flaubert_base_cased/merges.txt\"\r\n\r\nwith open(file_vocab) as f:\r\n vocab = json.load(f)\r\n\r\nwith open(file_merges) as f:\r\n merges = f.readlines()\r\n\r\nmerges = [merge.split(\" \") for merge in merges]\r\nmerges = [(merge[0], merge[1]) for merge in merges if len(merge)==3]\r\n\r\ntokenizer = Tokenizer(\r\n BPE(\r\n vocab,\r\n merges,\r\n unk_token=\"<unk>\",\r\n end_of_word_suffix=\"</w>\",\r\n fuse_unk=True,\r\n )\r\n )\r\n```\r\nError message:\r\n```bash\r\n---------------------------------------------------------------------------\r\nException Traceback (most recent call last)\r\n[<ipython-input-26-b70d9b50bd17>](https://localhost:8080/#) in <module>()\r\n 5 unk_token=\"<unk>\",\r\n 6 end_of_word_suffix=\"</w>\",\r\n----> 7 fuse_unk=True,\r\n 8 )\r\n 9 )\r\n\r\nException: Error while initializing BPE: Token `trouvรฉcap` out of vocabulary\r\n```\r\nBut by cleaning the merges file we can initialize the tokenizer without errors\r\n```python\r\n# ------ Clean up step ------\r\nnew_merges = []\r\nfor token_1, token_2 in merges:\r\n if token_1 not in vocab or token_2 not in vocab or f\"{token_1}{token_2}\" not in vocab:\r\n print(token_1, token_2)\r\n continue\r\n new_merges.append((token_1, token_2))\r\n# ---------------------------\r\n \r\ntokenizer = Tokenizer(\r\n BPE(\r\n vocab,\r\n new_merges,\r\n 
unk_token=\"<unk>\",\r\n end_of_word_suffix=\"</w>\",\r\n fuse_unk=True,\r\n )\r\n )\r\n```\r\n\r\n",
"@SaulLu Thanks for your response. \r\n\r\n> Am I missing something by thinking that this would achieve the desired final behaviour?\r\n\r\nCleaning the \"merges\" file will definitely result in different encoding outputs from the slow and fast tokenizers. For example, in the case of FlauBERT, the slow and fast tokenizers will encode/tokenize any word containing the sub-string `trouvรฉcap` differently.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I am still looking forward to using the upcoming \"sharing a custom tokenizer\" feature =) ",
"> Cleaning the \"merges\" file will definitely result in different encoding outputs from the slow and fast tokenizers. For example, in the case of FlauBERT, the slow and fast tokenizers will encode/tokenize any word containing the sub-string trouvรฉcap differently.\r\n\r\nI'm sorry, I didn't react to your message! You are right, my proposal will not be exactly the same as the current slow version.\r\n\r\nOne specific thing to know about this particular case of FlauBERT is that currently the slow tokenizer doesn't behave exactly like FlauBERT's original tokenizer which used [FastBPE](https://github.com/glample/fastBPE).\r\n\r\nFor example, `trouvรฉcaptivantes` is not tokenized in the same way:\r\n```\r\nTransformers version: ['<s>', '<unk>', 'tiv', 'antes</w>', '</s>']\r\nFastBPE version: ['<s>', 'trouv', 'รฉcap', 'tiv', 'antes</w>', '</s>']\r\n```\r\n\r\nIdeally, we would like to have an exact match, but in this case I think the changes that would have to be made to achieve this would be very cumbersome compared to the difference observed (`trouvรฉcaptivantes` is not a word in French but the concatenation of 2 words, without typography we should have had `trouvรฉ captivantes`). All that to say, it's very very complicated to have perfect matching between different tokenization libraries and maintaining long-term hacks is not easy and that's why I think the sharing feature is really a perfect use case for your proposal! "
] | 1,652
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
This PR is to add a "fast" BARTpho tokenizer (backed by HuggingFace's *tokenizers* library).
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17254/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17254/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17254",
"html_url": "https://github.com/huggingface/transformers/pull/17254",
"diff_url": "https://github.com/huggingface/transformers/pull/17254.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17254.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17253
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17253/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17253/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17253/events
|
https://github.com/huggingface/transformers/pull/17253
| 1,235,974,967
|
PR_kwDOCUB6oc430mSt
| 17,253
|
Adding CVT Model
|
{
"login": "AnugunjNaman",
"id": 42839570,
"node_id": "MDQ6VXNlcjQyODM5NTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/42839570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AnugunjNaman",
"html_url": "https://github.com/AnugunjNaman",
"followers_url": "https://api.github.com/users/AnugunjNaman/followers",
"following_url": "https://api.github.com/users/AnugunjNaman/following{/other_user}",
"gists_url": "https://api.github.com/users/AnugunjNaman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AnugunjNaman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AnugunjNaman/subscriptions",
"organizations_url": "https://api.github.com/users/AnugunjNaman/orgs",
"repos_url": "https://api.github.com/users/AnugunjNaman/repos",
"events_url": "https://api.github.com/users/AnugunjNaman/events{/privacy}",
"received_events_url": "https://api.github.com/users/AnugunjNaman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@NielsRogge can you review it and suggest any further changes.\r\nI'm not sure how you want to modify `cls_token` part. Since modifying in `CvtStage` section (stopping the split) will change shape of hidden states (stored in all hidden states and passing of 4D shape for CNN in next layer).\r\n\r\nI leave that part to you on how you want to change it to pass it over different classes further.\r\n\r\nI have run the `make fix-copies` and done docstrings part. I think everything is done apart from change you wanted to make for `cls_token`.\r\n\r\n"
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
Add CvT Model for Vision Classification
Fixes #13158
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17253/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17253",
"html_url": "https://github.com/huggingface/transformers/pull/17253",
"diff_url": "https://github.com/huggingface/transformers/pull/17253.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17253.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17252
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17252/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17252/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17252/events
|
https://github.com/huggingface/transformers/issues/17252
| 1,235,940,575
|
I_kwDOCUB6oc5JqvTf
| 17,252
|
torch.cuda.amp.autocast not working in huggingface nlp models.
|
{
"login": "HaoKang-Timmy",
"id": 60107867,
"node_id": "MDQ6VXNlcjYwMTA3ODY3",
"avatar_url": "https://avatars.githubusercontent.com/u/60107867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HaoKang-Timmy",
"html_url": "https://github.com/HaoKang-Timmy",
"followers_url": "https://api.github.com/users/HaoKang-Timmy/followers",
"following_url": "https://api.github.com/users/HaoKang-Timmy/following{/other_user}",
"gists_url": "https://api.github.com/users/HaoKang-Timmy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HaoKang-Timmy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HaoKang-Timmy/subscriptions",
"organizations_url": "https://api.github.com/users/HaoKang-Timmy/orgs",
"repos_url": "https://api.github.com/users/HaoKang-Timmy/repos",
"events_url": "https://api.github.com/users/HaoKang-Timmy/events{/privacy}",
"received_events_url": "https://api.github.com/users/HaoKang-Timmy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[] | 1,652
| 1,652
| 1,652
|
NONE
| null |
### System Info
```shell
Hi, I am trying to fine-tune my roberta-base model on the RTE dataset using fp16.
But `torch.cuda.amp.autocast` does not seem to work in Hugging Face NLP models: the output of the model is `torch.float32` and there is no memory saving.
My code is below.
Also, is there an example of training NLP models with fp16 without using the Trainer?
```
### Who can help?
@LysandreJik @JetRunner
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch.nn as nn
import time
import torch
from datasets import load_dataset
from transformers import get_scheduler
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from datasets import load_metric
import torch.multiprocessing as mp
import torch.distributed as dist
import argparse
import os
from torch.optim import AdamW
from torch.cuda.amp import GradScaler, autocast
parser = argparse.ArgumentParser(description="PyTorch nlp Training")
parser.add_argument("--log", default="./test.txt", type=str)
parser.add_argument("--dataset", default="rte", type=str)
parser.add_argument("--lr", default=2e-5, type=float)
parser.add_argument("--epochs", default=20, type=int)
parser.add_argument("--task", default="rte", type=str)
parser.add_argument("--batches", default=8, type=int)
parser.add_argument("--workers", default=4, type=int)
task_to_keys = {
"cola": ("sentence", None),
"mnli": ("premise", "hypothesis"),
"mnli-mm": ("premise", "hypothesis"),
"mrpc": ("sentence1", "sentence2"),
"qnli": ("question", "sentence"),
"qqp": ("question1", "question2"),
"rte": ("sentence1", "sentence2"),
"sst2": ("sentence", None),
"stsb": ("sentence1", "sentence2"),
"wnli": ("sentence1", "sentence2"),
}
def main():
args = parser.parse_args()
mp.spawn(main_worker, nprocs=args.workers, args=(args.workers, args))
def main_worker(rank, process_num, args):
dist.init_process_group(
backend="nccl", init_method="tcp://127.0.0.1:1237", world_size=4, rank=rank
)
# dataset dataloaer
os.environ["TOKENIZERS_PARALLELISM"] = "true"
train_dataset = load_dataset("glue", args.task, split="train")
val_dataset = load_dataset("glue", args.task, split="validation")
sentence1_key, sentence2_key = task_to_keys[args.task]
tokenizer = AutoTokenizer.from_pretrained("roberta-base", use_fast=True)
# sentence1_key, sentence2_key = task_to_keys["cola"]
def encode(examples):
if sentence2_key is not None:
return tokenizer(
examples[sentence1_key],
examples[sentence2_key],
truncation=True,
padding="max_length",
max_length=128,
)
return tokenizer(
examples[sentence1_key],
truncation=True,
padding="max_length",
max_length=128,
)
train_dataset = train_dataset.map(encode, batched=True)
val_dataset = val_dataset.map(encode, batched=True)
val_dataset = val_dataset.map(
lambda examples: {"labels": examples["label"]}, batched=True
)
train_dataset = train_dataset.map(
lambda examples: {"labels": examples["label"]}, batched=True
)
train_dataset.set_format(
type="torch", columns=["input_ids", "labels", "attention_mask"]
)
val_dataset.set_format(
type="torch", columns=["input_ids", "labels", "attention_mask"]
)
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
train_dataloader = torch.utils.data.DataLoader(
train_dataset,
batch_size=8,
num_workers=12,
pin_memory=True,
drop_last=True,
shuffle=False,
sampler=train_sampler,
)
val_dataloader = torch.utils.data.DataLoader(
val_dataset,
batch_size=8,
num_workers=12,
pin_memory=True,
drop_last=True,
shuffle=False,
)
# metric
metric_mat = load_metric("glue", args.task)
metric_acc = load_metric("accuracy")
# model
epochs = args.epochs
model = AutoModelForSequenceClassification.from_pretrained("roberta-base")
model = model.to(rank)
optimizer = AdamW(
[{"params": model.parameters()}],
lr=args.lr,
)
model = torch.nn.parallel.DistributedDataParallel(model)
lr_scheduler = get_scheduler(
name="polynomial",
optimizer=optimizer,
num_warmup_steps=500,
num_training_steps=epochs * len(train_dataloader),
)
criterion = nn.CrossEntropyLoss().to(rank)
scaler = GradScaler()
for epoch in range(epochs):
model.train()
train_loss = 0.0
train_acc1 = 0.0
time_avg = 0.0
train_sampler.set_epoch(epoch)
for i, batch in enumerate(train_dataloader):
optimizer.zero_grad()
start = time.time()
batch = {k: v.to(rank) for k, v in batch.items()}
with autocast():
outputs = model(batch["input_ids"], batch["attention_mask"])
logits = outputs.logits
# batch["labels"] = batch["labels"].type(torch.float16)
loss = criterion(logits, batch["labels"])
pred = torch.argmax(logits, dim=1)
acc = metric_acc.compute(predictions=pred, references=batch["labels"])
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```
Train the code with 4 GPUs. Even with one GPU it shows no difference.
### Expected behavior
```shell
Using PyTorch amp.autocast should save memory and gain efficiency, but it seems not to.
```
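For reference, here is a minimal CPU-safe sketch of the intended mixed-precision step (assuming a recent PyTorch; on CUDA you would pass `device_type="cuda"` with `float16` plus a `GradScaler`, as in the script above). Inside the `autocast` context the forward pass runs in reduced precision, which can be verified from the logits dtype:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
x = torch.randn(8, 4)
y = torch.randint(0, 2, (8,))

optimizer.zero_grad()
# bfloat16 autocast on CPU; on GPU this would be float16 plus GradScaler
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    logits = model(x)           # linear runs in bfloat16 inside the context
    loss = criterion(logits, y)
loss.backward()
optimizer.step()
```

If the logits come out as `float32` instead of a reduced-precision dtype, the region is not actually being autocast, which would explain seeing no memory savings.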
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17252/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17251
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17251/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17251/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17251/events
|
https://github.com/huggingface/transformers/issues/17251
| 1,235,765,161
|
I_kwDOCUB6oc5JqEep
| 17,251
|
Support MobileBert model in transformer.onnx package
|
{
"login": "YUNQIUGUO",
"id": 35738743,
"node_id": "MDQ6VXNlcjM1NzM4NzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/35738743?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YUNQIUGUO",
"html_url": "https://github.com/YUNQIUGUO",
"followers_url": "https://api.github.com/users/YUNQIUGUO/followers",
"following_url": "https://api.github.com/users/YUNQIUGUO/following{/other_user}",
"gists_url": "https://api.github.com/users/YUNQIUGUO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YUNQIUGUO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YUNQIUGUO/subscriptions",
"organizations_url": "https://api.github.com/users/YUNQIUGUO/orgs",
"repos_url": "https://api.github.com/users/YUNQIUGUO/repos",
"events_url": "https://api.github.com/users/YUNQIUGUO/events{/privacy}",
"received_events_url": "https://api.github.com/users/YUNQIUGUO/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"It seems it is supported now: https://github.com/huggingface/transformers/pull/17029",
"Thanks for the pointer to this pr!"
] | 1,652
| 1,657
| 1,656
|
NONE
| null |
### Feature request
Just wondering, would it be possible to support MobileBERT in the `transformers.onnx` package? Or is there any quick hack we can try to export the MobileBERT model from Hugging Face to ONNX?
Thanks.
The model I am trying is: `google/mobilebert-uncased`
And the command : `python -m transformers.onnx --model=google/mobilebert-uncased onnx/`
`raise KeyError(
KeyError: "mobilebert is not supported yet. Only ['albert', 'bart', 'mbart', 'bert', 'ibert', 'camembert', 'distilbert', 'flaubert', 'marian', 'm2m-100', 'roberta', 't5', 'xlm-roberta', 'gpt2', 'gpt-j', 'gpt-neo', 'layoutlm', 'electra', 'vit', 'beit', 'blenderbot', 'blenderbot-small'] are supported. If you want to support mobilebert please propose a PR or open up an issue."`
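The error above comes from a plain registry lookup in the ONNX export code; conceptually it behaves like this simplified sketch (the set of model types is abridged and the names are illustrative only, not the actual `transformers.onnx` internals):

```python
# Abridged, illustrative registry -- not the actual transformers.onnx tables.
SUPPORTED_MODEL_TYPES = {"albert", "bart", "bert", "distilbert", "roberta", "t5"}

def check_supported(model_type: str) -> str:
    """Raise KeyError for model types without a registered ONNX config."""
    if model_type not in SUPPORTED_MODEL_TYPES:
        raise KeyError(
            f"{model_type} is not supported yet. Only "
            f"{sorted(SUPPORTED_MODEL_TYPES)} are supported."
        )
    return model_type
```

Adding support for a new architecture then amounts to registering an ONNX config for it in that table, which is what the PR linked in the comments (#17029) did for MobileBERT.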
### Motivation
Trying to get the MobileBERT model exported to ONNX format for further investigation and for use in some ORT mobile scenarios.
### Your contribution
A PR is not available for now.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17251/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17250
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17250/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17250/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17250/events
|
https://github.com/huggingface/transformers/pull/17250
| 1,235,613,540
|
PR_kwDOCUB6oc43zgqi
| 17,250
|
Automatically sort auto mappings
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
COLLABORATOR
| null |
# What does this PR do?
This PR introduces a new script to automatically sort all the mappings in the auto modules alphabetically. It fixes/checks it with the usual `make style`/`make quality`/`make fixup` and a new step in the check code quality job of the CI enforces it has properly been applied.
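The core of such a check can be sketched as follows (a simplified illustration, not the actual sorting script shipped in the PR):

```python
def mapping_is_sorted(keys):
    """Return True if the mapping keys are already in alphabetical order."""
    return list(keys) == sorted(keys)

def sort_mapping_lines(lines):
    """Sort lines like '    ("model_type", "ModelClass"),' alphabetically."""
    return sorted(lines, key=lambda line: line.strip().lower())

lines = [
    '        ("bert", "BertModel"),',
    '        ("albert", "AlbertModel"),',
]
fixed = sort_mapping_lines(lines)  # "albert" entry now comes first
```

The fixer rewrites the mapping in place when the check fails, so `make style` can repair an unsorted mapping while the CI job only verifies it.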
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17250/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17250",
"html_url": "https://github.com/huggingface/transformers/pull/17250",
"diff_url": "https://github.com/huggingface/transformers/pull/17250.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17250.patch",
"merged_at": 1652721860000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17249
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17249/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17249/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17249/events
|
https://github.com/huggingface/transformers/pull/17249
| 1,235,581,045
|
PR_kwDOCUB6oc43zZ-M
| 17,249
|
Fix test_model_parallelization
|
{
"login": "lkm2835",
"id": 30465912,
"node_id": "MDQ6VXNlcjMwNDY1OTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/30465912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lkm2835",
"html_url": "https://github.com/lkm2835",
"followers_url": "https://api.github.com/users/lkm2835/followers",
"following_url": "https://api.github.com/users/lkm2835/following{/other_user}",
"gists_url": "https://api.github.com/users/lkm2835/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lkm2835/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lkm2835/subscriptions",
"organizations_url": "https://api.github.com/users/lkm2835/orgs",
"repos_url": "https://api.github.com/users/lkm2835/repos",
"events_url": "https://api.github.com/users/lkm2835/events{/privacy}",
"received_events_url": "https://api.github.com/users/lkm2835/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Change is fine by me! @sgugger @stas00 what do you think? "
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
When the number of GPUs is greater than ```len(model.device_map.keys())```, an exceptional case occurs.
Fixes #17248
## Who can review?
@patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17249/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17249",
"html_url": "https://github.com/huggingface/transformers/pull/17249",
"diff_url": "https://github.com/huggingface/transformers/pull/17249.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17249.patch",
"merged_at": 1652736649000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17248
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17248/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17248/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17248/events
|
https://github.com/huggingface/transformers/issues/17248
| 1,235,561,413
|
I_kwDOCUB6oc5JpSvF
| 17,248
|
gpt2 model parallelization test failed
|
{
"login": "lkm2835",
"id": 30465912,
"node_id": "MDQ6VXNlcjMwNDY1OTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/30465912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lkm2835",
"html_url": "https://github.com/lkm2835",
"followers_url": "https://api.github.com/users/lkm2835/followers",
"following_url": "https://api.github.com/users/lkm2835/following{/other_user}",
"gists_url": "https://api.github.com/users/lkm2835/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lkm2835/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lkm2835/subscriptions",
"organizations_url": "https://api.github.com/users/lkm2835/orgs",
"repos_url": "https://api.github.com/users/lkm2835/repos",
"events_url": "https://api.github.com/users/lkm2835/events{/privacy}",
"received_events_url": "https://api.github.com/users/lkm2835/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.13.0-1015-gcp-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
```
### Who can help?
On the main branch, ``` pytest tests/models/gpt2/test_modeling_gpt2.py ``` fails at ```test_model_parallelization```.
```
def test_model_parallelization(self):
...
# Assert that the memory use on all devices is higher than it was when loaded only on CPU
for n in range(torch.cuda.device_count()):
> self.assertGreater(memory_after_parallelization[n], memory_at_start[n])
E AssertionError: 0 not greater than 0
tests/test_modeling_common.py:2069: AssertionError
```
I'm implementing model parallelization for OPT, and it has the same problem. (https://github.com/huggingface/transformers/pull/17245)
However, it works with the Trainer.
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
``` pytest tests/models/gpt2/test_modeling_gpt2.py ```
### Expected behavior
```shell
memory_at_start : [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
memory_after_parallelization : [179, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 0, 0, 0, 0]
The number of my GPU devices is 16, but len(self.h) in GPT-2 is 12.
```
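The empty devices follow directly from how the default device map splits layers across GPUs; a simplified sketch of that logic (mirroring the `get_device_map` helper in `transformers.utils.model_parallel_utils`, whose exact name and location are assumed here):

```python
from math import ceil

def get_device_map(n_layers, devices):
    """Distribute layer indices across devices in contiguous blocks."""
    layers = list(range(n_layers))
    n_blocks = int(ceil(n_layers / len(devices)))
    layers_list = [layers[i:i + n_blocks] for i in range(0, n_layers, n_blocks)]
    return dict(zip(devices, layers_list))

# 16 GPUs but only 12 GPT-2 blocks: zip() truncates, so the last 4 devices
# get no layers at all, their memory use stays at 0, and assertGreater fails.
device_map = get_device_map(n_layers=12, devices=list(range(16)))
```

With more devices than layers, each block holds a single layer and the surplus devices simply never appear in the map, which is exactly the `0 not greater than 0` failure the test reports.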
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17248/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17247
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17247/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17247/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17247/events
|
https://github.com/huggingface/transformers/pull/17247
| 1,235,549,655
|
PR_kwDOCUB6oc43zTrz
| 17,247
|
Add support for pretraining recurring span selection to Splinter
|
{
"login": "jvcop",
"id": 4559066,
"node_id": "MDQ6VXNlcjQ1NTkwNjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4559066?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jvcop",
"html_url": "https://github.com/jvcop",
"followers_url": "https://api.github.com/users/jvcop/followers",
"following_url": "https://api.github.com/users/jvcop/following{/other_user}",
"gists_url": "https://api.github.com/users/jvcop/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jvcop/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jvcop/subscriptions",
"organizations_url": "https://api.github.com/users/jvcop/orgs",
"repos_url": "https://api.github.com/users/jvcop/repos",
"events_url": "https://api.github.com/users/jvcop/events{/privacy}",
"received_events_url": "https://api.github.com/users/jvcop/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for reviewing the PR! :) I added the suggested changes.\r\n\r\n@jvcop will add an answer to this https://github.com/huggingface/transformers/pull/17247#discussion_r873691809",
"As far as I can see all comments have been addressed - merging! Thanks a lot for your work here @jvcop !",
"Thanks a lot for the fast review! And to @tobigue who was an integral part of this :tada: "
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
This pull request aims to add support for recurring span selection pretraining as proposed by the authors of [Splinter](https://arxiv.org/abs/2101.00438). The pretraining objective differs from the question answering task in a couple of ways:
- There is no single question; instead, a number of question tokens replace recurring spans.
- The shape of `start_positions` and `end_positions` is `(batch_size, num_questions)` instead of `(batch_size, )`.
- The shape of `start_logits` and `end_logits` is `(batch_size, num_questions, sequence_length)` instead of `(batch_size, sequence_length)`.
- The loss should ignore zero positions, i.e. `ignore_index=0`. Zeros are used in the original code to denote padded question tokens and their start and end positions.
To this end, we added `SplinterForPreTraining`.
Minimal training example:
```python
import torch
from torch.utils.data import IterableDataset
from transformers import SplinterConfig
from transformers import SplinterForPreTraining
from transformers import Trainer
from transformers import TrainingArguments
class QuestionAnsweringDataset(IterableDataset):
def __iter__(self):
yield {
"input_ids": torch.tensor([101, 104, 123, 456, 104, 234, 567, 102]),
"attention_mask": torch.tensor([1, 1, 1, 1, 1, 1, 1, 1]),
"token_type_ids": torch.tensor([0, 0, 0, 0, 0, 0, 0, 0]),
"question_positions": torch.tensor([1, 4]),
"start_positions": torch.tensor([2, 5]),
"end_positions": torch.tensor([3, 6]),
}
config = SplinterConfig()
model = SplinterForPreTraining(config)
dataset = QuestionAnsweringDataset()
trainer = Trainer(
model=model,
args=TrainingArguments(max_steps=3, output_dir="/tmp"),
train_dataset=dataset,
)
trainer.train()
```
CC @tobigue
@patil-suraj @LysandreJik @patrickvonplaten @oriram
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17247/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17247",
"html_url": "https://github.com/huggingface/transformers/pull/17247",
"diff_url": "https://github.com/huggingface/transformers/pull/17247.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17247.patch",
"merged_at": 1652823734000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17246
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17246/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17246/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17246/events
|
https://github.com/huggingface/transformers/pull/17246
| 1,235,495,173
|
PR_kwDOCUB6oc43zIBX
| 17,246
|
Add PR title to push CI report
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
COLLABORATOR
| null |
# What does this PR do?
As title. The current effect looks like
<img width="512" alt="Screenshot 2022-05-13 191853" src="https://user-images.githubusercontent.com/2521628/168335363-ddb06fb3-4c3e-40ea-8b3a-2a82e9402c38.png">
I need to figure out a way to add a link.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17246/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17246",
"html_url": "https://github.com/huggingface/transformers/pull/17246",
"diff_url": "https://github.com/huggingface/transformers/pull/17246.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17246.patch",
"merged_at": 1652471441000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17245
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17245/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17245/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17245/events
|
https://github.com/huggingface/transformers/pull/17245
| 1,235,464,089
|
PR_kwDOCUB6oc43zBXg
| 17,245
|
Add OPT model parallelize
|
{
"login": "lkm2835",
"id": 30465912,
"node_id": "MDQ6VXNlcjMwNDY1OTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/30465912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lkm2835",
"html_url": "https://github.com/lkm2835",
"followers_url": "https://api.github.com/users/lkm2835/followers",
"following_url": "https://api.github.com/users/lkm2835/following{/other_user}",
"gists_url": "https://api.github.com/users/lkm2835/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lkm2835/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lkm2835/subscriptions",
"organizations_url": "https://api.github.com/users/lkm2835/orgs",
"repos_url": "https://api.github.com/users/lkm2835/repos",
"events_url": "https://api.github.com/users/lkm2835/events{/privacy}",
"received_events_url": "https://api.github.com/users/lkm2835/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17245). All of your documentation changes will be reflected on that endpoint.",
"Hey @lkm2835,\r\n\r\nThanks for your PR - however this way of parallelizing the model is a bit outdated. The recommended way of using the model in parallel is to use `accelerate` see: https://twitter.com/huggingface/status/1524783489593360385\r\n\r\nWe'll soon have this natively supported in `transformers` as well cc @sgugger ",
"Then, is it better to close this PR?",
"> natively\r\n\r\nI'm afraid so! There are lots of other \"Good first issues\" or \"Good second issues\" though if you'd like to give it a try :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,652
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# Model Parallelism for OPT
Added ```parallelize``` and ```deparallelize``` methods on ```OPTDecoder```, ```OPTModel``` and ```OPTForCausalLM```.
Referred to the ```gpt2``` model parallelization implementation (https://github.com/huggingface/transformers/pull/8696).
Fixes #17240
## Who can review?
Let me know if you need any modifications, @patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17245/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17245",
"html_url": "https://github.com/huggingface/transformers/pull/17245",
"diff_url": "https://github.com/huggingface/transformers/pull/17245.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17245.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17244
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17244/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17244/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17244/events
|
https://github.com/huggingface/transformers/issues/17244
| 1,235,443,098
|
I_kwDOCUB6oc5Jo12a
| 17,244
|
Error in Loading the Feature extractor
|
{
"login": "guneetsk99",
"id": 43180442,
"node_id": "MDQ6VXNlcjQzMTgwNDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/43180442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guneetsk99",
"html_url": "https://github.com/guneetsk99",
"followers_url": "https://api.github.com/users/guneetsk99/followers",
"following_url": "https://api.github.com/users/guneetsk99/following{/other_user}",
"gists_url": "https://api.github.com/users/guneetsk99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guneetsk99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guneetsk99/subscriptions",
"organizations_url": "https://api.github.com/users/guneetsk99/orgs",
"repos_url": "https://api.github.com/users/guneetsk99/repos",
"events_url": "https://api.github.com/users/guneetsk99/events{/privacy}",
"received_events_url": "https://api.github.com/users/guneetsk99/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This is fixed by #17239\r\nWill make a patch for PyPi.",
"Patched on pypi!"
] | 1,652
| 1,652
| 1,652
|
NONE
| null |
### System Info
```shell
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/transformers/commands/env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2022-05-13 16:11:52.801587: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.18.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
```
### Who can help?
@sgugger @LysandreJik

I am facing this error
and am not able to figure out how to get past it. Until yesterday I was running the same code and didn't face any errors, but today when I ran it to reproduce the results, this happened. Can you please help?
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
from hugsvision.nnet.VisionClassifierTrainer import VisionClassifierTrainer
from transformers import AutoFeatureExtractor, SwinForImageClassification
from transformers import ViTFeatureExtractor, ViTForImageClassification
from transformers import BeitFeatureExtractor, BeitForImageClassification
from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationLearned
trainer = VisionClassifierTrainer(
model_name = "ViT-Model",
train = train,
test = test,
output_dir = "./out/",
max_epochs = 5,
batch_size = 16, # On RTX 2080 Ti
lr = 0.0003,
fp16 = True,
model = ConvNextForImageClassification.from_pretrained(
huggingface_model,
num_labels = 5,
label2id = label2id,
id2label = id2label,
use_auth_token=True,
ignore_mismatched_sizes=True
),
feature_extractor = ConvNextFeatureExtractor.from_pretrained(
huggingface_model,
),
)
```
### Expected behavior
```shell
I want to know why this error arose, which until yesterday didn't even exist.
Just to note, Hugsvision didn't update their code base in the past day, hence I reached out to you all for help.
Thanks in advance
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17244/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17243
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17243/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17243/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17243/events
|
https://github.com/huggingface/transformers/pull/17243
| 1,235,435,536
|
PR_kwDOCUB6oc43y7Rd
| 17,243
|
install dev. version of accelerate in docker file
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,654
| 1,652
|
COLLABORATOR
| null |
# What does this PR do?
Following an offline discussion with @muellerzr regarding this CI failure:
```
tests/trainer/test_trainer.py::TrainerIntegrationTest::test_auto_batch_size_finder
(line 776) ImportError:
```
I updated the Docker file in this PR. I will try to build the new Docker image once this PR is merged.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17243/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17243/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17243",
"html_url": "https://github.com/huggingface/transformers/pull/17243",
"diff_url": "https://github.com/huggingface/transformers/pull/17243.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17243.patch",
"merged_at": 1652464029000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17242
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17242/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17242/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17242/events
|
https://github.com/huggingface/transformers/pull/17242
| 1,235,423,381
|
PR_kwDOCUB6oc43y4vN
| 17,242
|
Quick fix for push CI report channel
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
COLLABORATOR
| null |
# What does this PR do?
As title.
I can definitely use
```
if ci_event == "scheduled":
...
else:
...
```
Let me know if you prefer that approach, @LysandreJik .
(I updated the secret, even though the channel ID might be the same as before.)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17242/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17242",
"html_url": "https://github.com/huggingface/transformers/pull/17242",
"diff_url": "https://github.com/huggingface/transformers/pull/17242.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17242.patch",
"merged_at": 1652468397000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17241
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17241/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17241/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17241/events
|
https://github.com/huggingface/transformers/issues/17241
| 1,235,411,070
|
I_kwDOCUB6oc5JouB-
| 17,241
|
Question answering pipeline: error for long text sequences when `max_seq_len` is specified
|
{
"login": "ATroxler",
"id": 40578555,
"node_id": "MDQ6VXNlcjQwNTc4NTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/40578555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ATroxler",
"html_url": "https://github.com/ATroxler",
"followers_url": "https://api.github.com/users/ATroxler/followers",
"following_url": "https://api.github.com/users/ATroxler/following{/other_user}",
"gists_url": "https://api.github.com/users/ATroxler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ATroxler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ATroxler/subscriptions",
"organizations_url": "https://api.github.com/users/ATroxler/orgs",
"repos_url": "https://api.github.com/users/ATroxler/repos",
"events_url": "https://api.github.com/users/ATroxler/events{/privacy}",
"received_events_url": "https://api.github.com/users/ATroxler/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I think this is expected, not a bug. The `max_seq_length` of `distilbert` is 512. Setting `max_seq_length` to be larger than 512 essentially disables the truncation. When you feed a text longer than 512 tokens, it will raise this error",
"No, according to the documentation of `transformers.QuestionAnsweringPipeline.__call__`, the parameter `max_seq_len` is the maximum length of the total sentence (context + question) *after tokenization*.\r\nThe context will be split in several chunks if needed, i.e., if it is longer than the maximum sequence length of the model.\r\nAnd this is the way the pipeline behaved up to `transformers==4.17.0`.\r\nThis feature is useful to process long sequences (longer than model length).",
"@ATroxler \r\n\r\nI am not sure this was changed since 4.17.0 since the diff does concern this parameter\r\n\r\n```diff\r\ndiff --git a/src/transformers/pipelines/question_answering.py b/src/transformers/pipelines/question_answering.py\r\nindex efab83b92..bbffa3471 100644\r\n--- a/src/transformers/pipelines/question_answering.py\r\n+++ b/src/transformers/pipelines/question_answering.py\r\n@@ -5,10 +5,9 @@ from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union\r\n import numpy as np\r\n \r\n from ..data import SquadExample, SquadFeatures, squad_convert_examples_to_features\r\n-from ..file_utils import PaddingStrategy, add_end_docstrings, is_tf_available, is_torch_available\r\n from ..modelcard import ModelCard\r\n from ..tokenization_utils import PreTrainedTokenizer\r\n-from ..utils import logging\r\n+from ..utils import PaddingStrategy, add_end_docstrings, is_tf_available, is_torch_available, logging\r\n from .base import PIPELINE_INIT_ARGS, ArgumentHandler, ChunkPipeline\r\n \r\n \r\n@@ -302,11 +301,6 @@ class QuestionAnsweringPipeline(ChunkPipeline):\r\n ]\r\n )\r\n \r\n- # keep the cls_token unmasked (some models use it to indicate unanswerable questions)\r\n- if self.tokenizer.cls_token_id is not None:\r\n- cls_index = np.nonzero(encoded_inputs[\"input_ids\"] == self.tokenizer.cls_token_id)\r\n- p_mask[cls_index] = 0\r\n-\r\n features = []\r\n for span_idx in range(num_spans):\r\n input_ids_span_idx = encoded_inputs[\"input_ids\"][span_idx]\r\n@@ -316,6 +310,11 @@ class QuestionAnsweringPipeline(ChunkPipeline):\r\n token_type_ids_span_idx = (\r\n encoded_inputs[\"token_type_ids\"][span_idx] if \"token_type_ids\" in encoded_inputs else None\r\n )\r\n+ # keep the cls_token unmasked (some models use it to indicate unanswerable questions)\r\n+ if self.tokenizer.cls_token_id is not None:\r\n+ cls_indices = np.nonzero(np.array(input_ids_span_idx) == self.tokenizer.cls_token_id)[0]\r\n+ for cls_index in cls_indices:\r\n+ p_mask[span_idx][cls_index] = 
0\r\n submask = p_mask[span_idx]\r\n if isinstance(submask, np.ndarray):\r\n submask = submask.tolist()\r\n@@ -399,8 +398,11 @@ class QuestionAnsweringPipeline(ChunkPipeline):\r\n end_ = np.where(undesired_tokens_mask, -10000.0, end_)\r\n \r\n # Normalize logits and spans to retrieve the answer\r\n- start_ = np.exp(start_ - np.log(np.sum(np.exp(start_), axis=-1, keepdims=True)))\r\n- end_ = np.exp(end_ - np.log(np.sum(np.exp(end_), axis=-1, keepdims=True)))\r\n+ start_ = np.exp(start_ - start_.max(axis=-1, keepdims=True))\r\n+ start_ = start_ / start_.sum()\r\n+\r\n+ end_ = np.exp(end_ - end_.max(axis=-1, keepdims=True))\r\n+ end_ = end_ / end_.sum()\r\n \r\n if handle_impossible_answer:\r\n min_null_score = min(min_null_score, (start_[0, 0] * end_[0, 0]).item())\r\n```\r\n\r\nHowever there was indeed a bugfix in 4.17.0 (from 4.16.0) where `max_seq_len` was not passed (so it was basically ignored).\r\nWhen ignored (or not passed) `max_seq_len == min(self.tokenizer.max_seq_len, 384)` even before that (not sure which version bug much earlier) it was always 384.\r\n\r\n\r\n`max_seq_len` corresponds to the maximum length of a single chunk (question + context chunk), and it will chunk indeed if full_context is too long.\r\n\r\nSo @sijunhe is indeed correct here.\r\n\r\nRereading the documentation of this parameter I can understand the confusion.\r\n\r\n```\r\n The maximum length of the total sentence (context + question) after tokenization. The context will be\r\n split in several chunks (using `doc_stride`) if needed.\r\n```\r\n\r\nWould something like :\r\n```\r\n The maximum length of the total sentence (context + question) of each chunk passed to the model. 
If the context is too large, it will be split in several chunks (using `doc_stride` as overlap length) if needed.\r\n```\r\n\r\nBe more understandable ?\r\n\r\nThe `(context + question)` wants to convey that if the question is taking too much space, then there's less room for context so you need to be careful about extremely long questions.\r\n\r\nIs the problem clearer to you ? Do you have any suggestions to improve even further the docs ?\r\n\r\nAlso if I understand correctly you want to limit the amount of context fed to your model right (not just chunking but really ignoring part of the text you send, which usually we try to avoid since you are sending it :) ) ? May I ask why you want to do so ? Are you using documents of arbitrary length and know that the answer should be in the beginning for instance ?\r\n\r\nThe idea is just to figure out how we could maybe cook up a nice option for this use case (while keeping the others understandable too)\r\n ",
"Many thanks, @Narsil , for clarification.\r\nIndeed, changing the documentation in the suggested manner would avoid the confusion.\r\n\r\nMy use case is to use the QA pipeline as a pre-proessor for a sequence classification task:\r\n* STEP 1 - apply QA pipeline to extract from long input sequences the parts relevant to the task.\r\n* STEP 2 - apply sequence classification to the extract\r\n\r\nWith my interpretation of the existing documentation, I had been under the impression that the sequences are truncated by default to a length of 384, which I wanted to override by specifying `max_seq_len=2000`.\r\nThanks to your explanation I understand now that I can simply omit the parameter `max_seq_len`, and the pipeline behaves eactly the way I want, i.e. it breaks the long context into chunks.\r\n\r\n**Example:**\r\n```python\r\n!pip install transformers==4.18.0\r\nfrom transformers import pipeline\r\ncontext = 100 * \"This part of the text is totally useless. \" + \"The quick brown fox jumps over the lazy dog.\"\r\nqa_pipeline = pipeline(\"question-answering\")\r\nqa_pipeline(question=\"what does the fox do?\", context=context)\r\n```\r\n**Result:**\r\n```\r\n{'answer': 'jumps over the lazy dog',\r\n 'end': 4643,\r\n 'score': 0.639270007610321,\r\n 'start': 4620}\r\n```\r\n:-)",
"@ATroxler I opened an issue to clarify this documentation ! Thanks for raising the issue, and glad it works as intended !"
] | 1,652
| 1,652
| 1,652
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.17.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyTorch version (GPU?): 1.11.0+cu113 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
**code:**
```python
#!pip install transformers==4.16.0
!pip install transformers==4.17.0
from transformers import pipeline
context = 100 * "The quick brown fox jumps over the lazy dog. "
qa_pipeline = pipeline("question-answering", max_seq_len=2000)
qa_pipeline(question="what does the fox do?", context=context)
```
**exception traceback:**
```
No model was supplied, defaulted to distilbert-base-cased-distilled-squad (https://huggingface.co/distilbert-base-cased-distilled-squad)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-4-d1e4a860038f>](https://localhost:8080/#) in <module>()
1 qa_pipeline = pipeline("question-answering", max_seq_len=2000)
----> 2 qa_pipeline(question="what does the fox do?", context=context)
10 frames
[/usr/local/lib/python3.7/dist-packages/transformers/pipelines/question_answering.py](https://localhost:8080/#) in __call__(self, *args, **kwargs)
249 examples = self._args_parser(*args, **kwargs)
250 if len(examples) == 1:
--> 251 return super().__call__(examples[0], **kwargs)
252 return super().__call__(examples, **kwargs)
253
[/usr/local/lib/python3.7/dist-packages/transformers/pipelines/base.py](https://localhost:8080/#) in __call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1025 return self.iterate(inputs, preprocess_params, forward_params, postprocess_params)
1026 else:
-> 1027 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
1028
1029 def run_multi(self, inputs, preprocess_params, forward_params, postprocess_params):
[/usr/local/lib/python3.7/dist-packages/transformers/pipelines/base.py](https://localhost:8080/#) in run_single(self, inputs, preprocess_params, forward_params, postprocess_params)
1047 all_outputs = []
1048 for model_inputs in self.preprocess(inputs, **preprocess_params):
-> 1049 model_outputs = self.forward(model_inputs, **forward_params)
1050 all_outputs.append(model_outputs)
1051 outputs = self.postprocess(all_outputs, **postprocess_params)
[/usr/local/lib/python3.7/dist-packages/transformers/pipelines/base.py](https://localhost:8080/#) in forward(self, model_inputs, **forward_params)
942 with inference_context():
943 model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device)
--> 944 model_outputs = self._forward(model_inputs, **forward_params)
945 model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device("cpu"))
946 else:
[/usr/local/lib/python3.7/dist-packages/transformers/pipelines/question_answering.py](https://localhost:8080/#) in _forward(self, inputs)
369 example = inputs["example"]
370 model_inputs = {k: inputs[k] for k in self.tokenizer.model_input_names}
--> 371 start, end = self.model(**model_inputs)[:2]
372 return {"start": start, "end": end, "example": example, **inputs}
373
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/transformers/models/distilbert/modeling_distilbert.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, start_positions, end_positions, output_attentions, output_hidden_states, return_dict)
853 output_attentions=output_attentions,
854 output_hidden_states=output_hidden_states,
--> 855 return_dict=return_dict,
856 )
857 hidden_states = distilbert_output[0] # (bs, max_query_len, dim)
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/transformers/models/distilbert/modeling_distilbert.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
546
547 if inputs_embeds is None:
--> 548 inputs_embeds = self.embeddings(input_ids) # (bs, seq_length, dim)
549 return self.transformer(
550 x=inputs_embeds,
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/transformers/models/distilbert/modeling_distilbert.py](https://localhost:8080/#) in forward(self, input_ids)
131 position_embeddings = self.position_embeddings(position_ids) # (bs, max_seq_length, dim)
132
--> 133 embeddings = word_embeddings + position_embeddings # (bs, max_seq_length, dim)
134 embeddings = self.LayerNorm(embeddings) # (bs, max_seq_length, dim)
135 embeddings = self.dropout(embeddings) # (bs, max_seq_length, dim)
RuntimeError: The size of tensor a (1009) must match the size of tensor b (512) at non-singleton dimension 1
```
### Expected behavior
```shell
Run through and produce a result similar to the following, like with transformers 4.16.0
{'answer': 'The quick brown fox jumps over the lazy dog',
'end': 3418,
'score': 0.017251048237085342,
'start': 3375}
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17241/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17240
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17240/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17240/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17240/events
|
https://github.com/huggingface/transformers/issues/17240
| 1,235,405,788
|
I_kwDOCUB6oc5Josvc
| 17,240
|
Distributed Support for OPT models in transformers
|
{
"login": "Mrs-Hudson",
"id": 7013661,
"node_id": "MDQ6VXNlcjcwMTM2NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7013661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mrs-Hudson",
"html_url": "https://github.com/Mrs-Hudson",
"followers_url": "https://api.github.com/users/Mrs-Hudson/followers",
"following_url": "https://api.github.com/users/Mrs-Hudson/following{/other_user}",
"gists_url": "https://api.github.com/users/Mrs-Hudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mrs-Hudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mrs-Hudson/subscriptions",
"organizations_url": "https://api.github.com/users/Mrs-Hudson/orgs",
"repos_url": "https://api.github.com/users/Mrs-Hudson/repos",
"events_url": "https://api.github.com/users/Mrs-Hudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mrs-Hudson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@patrickvonplaten rasising a feature request for model parallelism for the newly added OPT models. Please triage/comment as you see fit",
"Same answer as in https://github.com/huggingface/transformers/pull/17245#issuecomment-1128064880 here. Think we'll soon have something that works out of the box cc @sgugger ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,652
| 1,655
| 1,655
|
NONE
| null |
### Feature request
Hi
Thanks a lot for adding OPT models to transformers. As of now the 55GB 30B-parameter model needs to be loaded into a single GPU, otherwise it [throws a CUDA memory error](https://discuss.huggingface.co/t/running-inference-on-opt-30m-on-gpu/17895/2).
It would be great if we could have distributed support for these [models, similar to gpt2](https://github.com/huggingface/transformers/pull/7772), so we can leverage multiple GPUs to run them.
### Motivation
Make it easier to run OPT models on available infra
### Your contribution
NA,
can test changes on my system
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17240/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17239
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17239/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17239/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17239/events
|
https://github.com/huggingface/transformers/pull/17239
| 1,235,401,234
|
PR_kwDOCUB6oc43y0H9
| 17,239
|
Fix Trainer for Datasets that don't have dict items
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"You're welcome "
] | 1,652
| 1,652
| 1,652
|
COLLABORATOR
| null |
# What does this PR do?
This PR fixes a break in `Trainer` when the dataset items are not dictionaries.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17239/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17239",
"html_url": "https://github.com/huggingface/transformers/pull/17239",
"diff_url": "https://github.com/huggingface/transformers/pull/17239.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17239.patch",
"merged_at": 1652456963000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17238
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17238/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17238/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17238/events
|
https://github.com/huggingface/transformers/pull/17238
| 1,235,379,662
|
PR_kwDOCUB6oc43yvic
| 17,238
|
[WIP] Use word_ids to determine if a pre-entity is a subword in TokenClassificationPipeline
|
{
"login": "barar-primer",
"id": 46694100,
"node_id": "MDQ6VXNlcjQ2Njk0MTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/46694100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/barar-primer",
"html_url": "https://github.com/barar-primer",
"followers_url": "https://api.github.com/users/barar-primer/followers",
"following_url": "https://api.github.com/users/barar-primer/following{/other_user}",
"gists_url": "https://api.github.com/users/barar-primer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/barar-primer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/barar-primer/subscriptions",
"organizations_url": "https://api.github.com/users/barar-primer/orgs",
"repos_url": "https://api.github.com/users/barar-primer/repos",
"events_url": "https://api.github.com/users/barar-primer/events{/privacy}",
"received_events_url": "https://api.github.com/users/barar-primer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17238). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,652
| 1,655
| 1,655
|
NONE
| null |
# What does this PR do?
Currently, `TokenClassificationPipeline` checks an attribute called `continuing_subword_prefix` in order to determine whether it can use the "correct" token aggregation strategy. Otherwise, it falls back to a heuristic that usually doesn't work well. This check works for BERT, but not for XLNet and RoBERTa:
```
>>> from transformers import AutoTokenizer
>>> bert_tk = AutoTokenizer.from_pretrained("bert-base-cased")
>>> print(getattr(bert_tk._tokenizer.model, "continuing_subword_prefix", None))
##
>>> xlnet_tk = AutoTokenizer.from_pretrained("xlnet-base-cased")
>>> print(getattr(xlnet_tk._tokenizer.model, "continuing_subword_prefix", None))
None
```
However, there is a better way. The fast tokenizers for XLNet and RoBERTa are word-aware and provide a `word_ids` method. This PR updates the pipeline to pass around the `word_ids` list until it is used in `gather_pre_entities`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik @Narsil I'd appreciate a draft review and then I'll update tests if this looks good.
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17238/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17238",
"html_url": "https://github.com/huggingface/transformers/pull/17238",
"diff_url": "https://github.com/huggingface/transformers/pull/17238.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17238.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17237
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17237/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17237/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17237/events
|
https://github.com/huggingface/transformers/pull/17237
| 1,235,357,074
|
PR_kwDOCUB6oc43yqqs
| 17,237
|
Align logits and labels in OPT
|
{
"login": "MichelBartels",
"id": 17650521,
"node_id": "MDQ6VXNlcjE3NjUwNTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/17650521?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MichelBartels",
"html_url": "https://github.com/MichelBartels",
"followers_url": "https://api.github.com/users/MichelBartels/followers",
"following_url": "https://api.github.com/users/MichelBartels/following{/other_user}",
"gists_url": "https://api.github.com/users/MichelBartels/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MichelBartels/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichelBartels/subscriptions",
"organizations_url": "https://api.github.com/users/MichelBartels/orgs",
"repos_url": "https://api.github.com/users/MichelBartels/repos",
"events_url": "https://api.github.com/users/MichelBartels/events{/privacy}",
"received_events_url": "https://api.github.com/users/MichelBartels/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"LGTM - this is because we took the model from BART which is an encoder decoder\r\nwdyt @ArthurZucker @patrickvonplaten ?",
"Oh, good find @MichelBartels! We weren't careful enough when reviewing the PR here - we should have aligned this with GPT2 right away.\r\n\r\nThe problem is that, it's not really wrong to **not** shift the labels and people could have already written their training pipelines with OPT where the labels are shifted before being passed to the model. So this could be backwards breaking here. I however do think it's important to align OPT as much as possible with GPT2. \r\n\r\n@LysandreJik @sgugger - do you think we could fix this in a patch release? ",
"Yes it needs to be addressed ASAP to avoid breaking changes, a patch release today is fine by me.",
"Yes, agreed! Let's merge this PR as-is, check if there are any other issues and do a patch release in a couple of hours."
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
For other decoder models, the labels are shifted and the last logit of each sequence is removed so they align when computing the loss. This isn't done for OPT. This PR adds this feature.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17237/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17237",
"html_url": "https://github.com/huggingface/transformers/pull/17237",
"diff_url": "https://github.com/huggingface/transformers/pull/17237.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17237.patch",
"merged_at": 1652708259000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17236
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17236/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17236/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17236/events
|
https://github.com/huggingface/transformers/issues/17236
| 1,235,352,752
|
I_kwDOCUB6oc5Jofyw
| 17,236
|
[Longformer] Issues with "is_index_masked" when using single encoder layer
|
{
"login": "NVukobrat",
"id": 44512290,
"node_id": "MDQ6VXNlcjQ0NTEyMjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/44512290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NVukobrat",
"html_url": "https://github.com/NVukobrat",
"followers_url": "https://api.github.com/users/NVukobrat/followers",
"following_url": "https://api.github.com/users/NVukobrat/following{/other_user}",
"gists_url": "https://api.github.com/users/NVukobrat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NVukobrat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NVukobrat/subscriptions",
"organizations_url": "https://api.github.com/users/NVukobrat/orgs",
"repos_url": "https://api.github.com/users/NVukobrat/repos",
"events_url": "https://api.github.com/users/NVukobrat/events{/privacy}",
"received_events_url": "https://api.github.com/users/NVukobrat/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hey @ydshieh, do you have any news/comments on this issue?",
"Hi @NVukobrat \r\n\r\nAfter looking more closely, here is my thought:\r\n\r\nWhat you suggests could be achieved by adding \r\n```\r\n is_index_masked = attention_mask < 0 \r\n is_index_global_attn = attention_mask > 0 \r\n is_global_attn = is_index_global_attn.flatten().any().item() \r\n```\r\nto `LongformerSelfAttention`. However, the input `attention_mask` is not as simple as `1 or 0` anymore, and the way you prepare it (`attention_mask = torch.ones`) as input to `LongformerLayer` is incorrectly. See the details below.\r\n\r\nI have to discuss with the team members about the design, but so far my personal understanding is that we encourage the users to interact with the models at the `Model` level (for example `LongformerModel`) instead of the intermediate `layer`s - to avoid these kinds of incorrect inputs.\r\n\r\n(Of course, the code is in open source, and the users could customize it if there is a real necessity - but they should be careful and responsible for the inputs) \r\n\r\nAgain, **let me have a discussion with the team members** first and come back to this thread.\r\n\r\n### More details here:\r\n\r\n- The (base model) `LongformerModel` receives `attention_mask` which is what we are familiar with:\r\n - It is usually prepared by a tokenizer\r\n - If not provided, we use `attention_mask = torch.ones(...)`\r\n\r\n- However, `attention_mask` will be processed in `LongformerModel`\r\n - dealing with global attention:\r\nhttps://github.com/huggingface/transformers/blob/b9bb417324c0d9013c505dc39c016ab9ca0e23c8/src/transformers/models/longformer/modeling_longformer.py#L1692-L1694\r\n - padding to window size:\r\nhttps://github.com/huggingface/transformers/blob/b9bb417324c0d9013c505dc39c016ab9ca0e23c8/src/transformers/models/longformer/modeling_longformer.py#L1696-L1703\r\n - change to additive attention mask (changing shape and using 
`-10000.0`):\r\nhttps://github.com/huggingface/transformers/blob/b9bb417324c0d9013c505dc39c016ab9ca0e23c8/src/transformers/models/longformer/modeling_longformer.py#L1705-L1709\r\n\r\n- In `LongformerEncoder`, the following are computed from the processed `attention_mask`\r\nhttps://github.com/huggingface/transformers/blob/b9bb417324c0d9013c505dc39c016ab9ca0e23c8/src/transformers/models/longformer/modeling_longformer.py#L1263-L1265\r\n\r\n- The inputs to `LongformerLayer`, `LongformerAttention` and `LongformerSelfAttention` are the processed version shown above",
"Hey @NVukobrat,\r\n\r\nCouldn't you just prepare the following args:\r\n\r\n`is_index_masked=None`\r\n`is_index_global_attn=None`\r\n`is_global_attn=None`\r\n\r\nbefore you pass your inputs to the LongformerSelfAttentionLayer. \r\n\r\nNote that `LongformerSelfAttentionLayer` is a non-public class which is subject to breaking changes,\r\nso we don't recommend directly importing it. If you do however, I think it also shouldn't be too difficult to create the necessary inputs before calling it no? ",
"Hey @ydshieh @patrickvonplaten, thanks a lot for providing the details! Very informative and helpful for our use case!\r\n\r\nSelecting and setting mentioned attributes (`attention_mask`, `layer_head_mask`, `is_index_masked`, `is_index_global_attn`, and `is_global_attn`) before passing activations to the `LongformerSelfAttentionLayer` works for us. \r\n\r\nThanks once again for your help!"
] | 1,652
| 1,653
| 1,653
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.7
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@ydshieh
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
1. Just run the code attached below
### Sample script:
```python
import torch
from transformers import LongformerModel
model = LongformerModel.from_pretrained(
"allenai/longformer-base-4096", torchscript=True
)
submodel = model.encoder.layer[0]
input_shape = (1, 512, 768)
activations = torch.rand(input_shape)
attention_mask = torch.ones((1, 512), dtype=torch.long)
results = submodel(activations, attention_mask=attention_mask)
```
### Traceback
```
Traceback (most recent call last):
File "/Users/nvukobrat/miniconda3/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/nvukobrat/miniconda3/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/nvukobrat/.vscode/extensions/ms-python.python-2022.6.2/pythonFiles/lib/python/debugpy/__main__.py", line 45, in <module>
cli.main()
File "/Users/nvukobrat/.vscode/extensions/ms-python.python-2022.6.2/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 444, in main
run()
File "/Users/nvukobrat/.vscode/extensions/ms-python.python-2022.6.2/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 285, in run_file
runpy.run_path(target_as_str, run_name=compat.force_str("__main__"))
File "/Users/nvukobrat/miniconda3/lib/python3.9/runpy.py", line 268, in run_path
return _run_module_code(code, init_globals, run_name,
File "/Users/nvukobrat/miniconda3/lib/python3.9/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Users/nvukobrat/miniconda3/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/nvukobrat/Desktop/Python/pytorch_longformer_huggingface_bug.py", line 15, in <module>
results = submodel(activations, attention_mask=attention_mask)
File "/Users/nvukobrat/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/nvukobrat/miniconda3/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1209, in forward
self_attn_outputs = self.attention(
File "/Users/nvukobrat/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/nvukobrat/miniconda3/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1145, in forward
self_outputs = self.self(
File "/Users/nvukobrat/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/nvukobrat/miniconda3/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 642, in forward
attn_probs = torch.masked_fill(attn_probs, is_index_masked[:, :, None, None], 0.0)
TypeError: 'NoneType' object is not subscriptable
```
### Proposed solution
The issue here is that `is_index_masked` isn't populated when a single encoder layer is run in isolation (the failure occurs in the Longformer self-attention layer). The proposed solution is to check for and populate `is_index_masked` dynamically.
File: `transformers/models/longformer/modeling_longformer.py`
Class : `LongformerSelfAttention`
Function: `forward`
Code:
```python
# rest of the code...
if layer_head_mask is not None:
assert layer_head_mask.size() == (
self.num_heads,
), f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}"
attn_probs = layer_head_mask.view(1, 1, -1, 1) * attn_probs
# softmax sometimes inserts NaN if all positions are masked, replace them with 0
# Proposed fix
if is_index_masked is None:
is_index_masked = attention_mask < 0
attn_probs = torch.masked_fill(attn_probs, is_index_masked[:, :, None, None], 0.0)
attn_probs = attn_probs.type_as(attn_scores)
# rest of the code...
```
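As a sanity check of the `is_index_masked = attention_mask < 0` derivation above, here is a tiny pure-Python sketch. The mask values are hypothetical: in Longformer's processed additive attention mask, masked positions carry a large negative value and global-attention tokens a positive one.

```python
# Illustrative only: a processed (additive) attention mask for 4 tokens --
# 0.0 for ordinary local attention, -10000.0 for a masked position, and a
# positive value for a global-attention token.
attention_mask = [0.0, 0.0, -10000.0, 2.0]

# The three derived quantities the self-attention layer expects:
is_index_masked = [v < 0 for v in attention_mask]
is_index_global_attn = [v > 0 for v in attention_mask]
is_global_attn = any(is_index_global_attn)
```

Note this only works if the layer receives the *processed* additive mask, not a raw `torch.ones` mask of 1s and 0s, where no position would ever satisfy `v < 0`.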
### Expected behavior
```shell
I would expect to be able to get single encoder layer outputs once I run the provided script.
Let me know if this fix is valid. If yes, I can open a pull request if needed.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17236/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17235
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17235/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17235/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17235/events
|
https://github.com/huggingface/transformers/pull/17235
| 1,235,336,077
|
PR_kwDOCUB6oc43ymUy
| 17,235
|
fix --gpus option for docker
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
COLLABORATOR
| null |
# What does this PR do?
Some multi-GPU tests in the scheduled CI workflow file use `--gpus 0`. I think this is an error, and those multi-GPU tests might be running with only 1 GPU.
This PR fixes it.
**Remark**: It's quite strange that, from the setup job log, we can see 2 GPUs in `nvidia-smi` while we have `options: --gpus 0`. I am not 100% sure this PR has real value, but at least it avoids some possible confusion.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17235/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17235",
"html_url": "https://github.com/huggingface/transformers/pull/17235",
"diff_url": "https://github.com/huggingface/transformers/pull/17235.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17235.patch",
"merged_at": 1652455586000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17234
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17234/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17234/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17234/events
|
https://github.com/huggingface/transformers/issues/17234
| 1,235,269,592
|
I_kwDOCUB6oc5JoLfY
| 17,234
|
add FAN model (vision)
|
{
"login": "johnnv1",
"id": 20444345,
"node_id": "MDQ6VXNlcjIwNDQ0MzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnv1",
"html_url": "https://github.com/johnnv1",
"followers_url": "https://api.github.com/users/johnnv1/followers",
"following_url": "https://api.github.com/users/johnnv1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions",
"organizations_url": "https://api.github.com/users/johnnv1/orgs",
"repos_url": "https://api.github.com/users/johnnv1/repos",
"events_url": "https://api.github.com/users/johnnv1/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnv1/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Hi there, I would like to implement this model @NielsRogge ",
"Hi @NielsRogge.\r\nTo my knowledge there would still be two pending tasks\r\n- [X] Update README.md to include FAN model\r\n- [ ] Migrate Files, weights to NVIDA organization space\r\nPlease let me know what additional tasks you think might be pending"
] | 1,652
| 1,669
| null |
CONTRIBUTOR
| null |
### Model description
Fully Attentional Networks (FAN) are a family of general-purpose Vision Transformer backbones that are highly robust to unseen natural corruptions across various visual recognition tasks.
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
* https://github.com/NVlabs/FAN
* https://arxiv.org/abs/2204.12451
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17234/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17234/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/17233
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17233/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17233/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17233/events
|
https://github.com/huggingface/transformers/issues/17233
| 1,235,264,279
|
I_kwDOCUB6oc5JoKMX
| 17,233
|
bug in modeling_tf_wav2vec2
|
{
"login": "ahmedlone127",
"id": 66001253,
"node_id": "MDQ6VXNlcjY2MDAxMjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/66001253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahmedlone127",
"html_url": "https://github.com/ahmedlone127",
"followers_url": "https://api.github.com/users/ahmedlone127/followers",
"following_url": "https://api.github.com/users/ahmedlone127/following{/other_user}",
"gists_url": "https://api.github.com/users/ahmedlone127/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahmedlone127/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahmedlone127/subscriptions",
"organizations_url": "https://api.github.com/users/ahmedlone127/orgs",
"repos_url": "https://api.github.com/users/ahmedlone127/repos",
"events_url": "https://api.github.com/users/ahmedlone127/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahmedlone127/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @ahmedlone127 ๐ The error appears because that line does not run without Eager Execution (see below), which is the case for your script. This is a problem on our side, and we will be fixing it ๐ \r\n\r\n\r\n",
"https://github.com/huggingface/transformers/issues/17285",
"Thanks a lot, hope this gets a fix soon :) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Following merging of #18153 the reproduction snippet runs on main without error. "
] | 1,652
| 1,658
| 1,658
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU? Yes): 1.11.0+cu113 (True)
- Tensorflow version (GPU? Yes): 2.8.0 (True)
- Flax version (GPU:Yes): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: I am running on colab therefore I think it's parallel
```
### Who can help?
@patrickvonplaten
@Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import os
from transformers import Wav2Vec2Processor, TFWav2Vec2ForCTC
import tensorflow as tf
import numpy as np
import torch
import json
from datasets import load_dataset
import soundfile as sf
import torch
Wav2vec2Model = "facebook/wav2vec2-base-960h"
Wav2vec2_EXPORT_PATH = f"/content/export_wav2vec2-base-960h"
# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
input_values = processor(ds[0]["audio"]["array"], return_tensors="tf",
padding="longest",return_attention_mask=True).input_values # Batch size 1
class MyWav2vec2(TFWav2Vec2ForCTC):
@tf.function(
input_signature=[
{
"input_ids": tf.TensorSpec((None, None), tf.float32, name="serving1_input_ids"),
}
]
)
def serving1(self, inputs):
outputs = self.call(input_values=inputs["input_ids"])
return self.serving_output(outputs)
mywav2vec2 = MyWav2vec2.from_pretrained(Wav2vec2Model)
tf.saved_model.save(mywav2vec2, Wav2vec2_EXPORT_PATH, signatures={
"serving1": mywav2vec2.serving1,
})
```
### Error
```
TypeError Traceback (most recent call last)
<ipython-input-13-06d8d6c67672> in <module>()
1 jslwav2vec2 = JslWav2vec2.from_pretrained(Wav2vec2Model)
2 tf.saved_model.save(jslwav2vec2, Wav2vec2_EXPORT_PATH, signatures={
----> 3 "serving1": jslwav2vec2.serving1,
4 # "serving2": mygpt2.serving2
5 })
43 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in save(obj, export_dir, signatures, options)
1332 # pylint: enable=line-too-long
1333 metrics.IncrementWriteApi(_SAVE_V2_LABEL)
-> 1334 save_and_return_nodes(obj, export_dir, signatures, options)
1335 metrics.IncrementWrite(write_version="2")
1336
/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in save_and_return_nodes(obj, export_dir, signatures, options, experimental_skip_checkpoint)
1367
1368 _, exported_graph, object_saver, asset_info, saved_nodes, node_paths = (
-> 1369 _build_meta_graph(obj, signatures, options, meta_graph_def))
1370 saved_model.saved_model_schema_version = (
1371 constants.SAVED_MODEL_SCHEMA_VERSION)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in _build_meta_graph(obj, signatures, options, meta_graph_def)
1534
1535 with save_context.save_context(options):
-> 1536 return _build_meta_graph_impl(obj, signatures, options, meta_graph_def)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in _build_meta_graph_impl(obj, signatures, options, meta_graph_def)
1480 signatures, wrapped_functions = (
1481 signature_serialization.canonicalize_signatures(signatures))
-> 1482 signature_serialization.validate_saveable_view(checkpoint_graph_view)
1483 signature_map = signature_serialization.create_signature_map(signatures)
1484 checkpoint_graph_view.set_signature(signature_map)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/signature_serialization.py in validate_saveable_view(saveable_view)
299 def validate_saveable_view(saveable_view):
300 """Performs signature-related sanity checks on `saveable_view`."""
--> 301 for name, dep in saveable_view.list_children(saveable_view.root):
302 if name == SIGNATURE_ATTRIBUTE_NAME:
303 if not isinstance(dep, _SignatureMap):
/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in list_children(self, obj)
134 obj,
135 save_type=base.SaveType.SAVEDMODEL,
--> 136 cache=self._serialization_cache))
137 for name, child in self._children_cache[obj].items():
138 yield base.TrackableReference(name, child)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/tracking/graph_view.py in list_children(self, obj, save_type, **kwargs)
254 obj._maybe_initialize_trackable()
255 children = [base.TrackableReference(name, ref) for name, ref
--> 256 in obj._trackable_children(save_type, **kwargs).items()]
257 # pylint: enable=protected-access
258
/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/tracking/base.py in _trackable_children(self, save_type, **kwargs)
1477 elif save_type == SaveType.SAVEDMODEL:
1478 cache = kwargs["cache"]
-> 1479 return self._get_legacy_saved_model_children(cache)
1480 else:
1481 raise ValueError("Unexpected format passed to `_trackable_children`. "
/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/tracking/base.py in _get_legacy_saved_model_children(self, serialization_cache)
1488
1489 # Retrieve functions attached to the object.
-> 1490 functions = self._list_functions_for_serialization(serialization_cache)
1491
1492 # Trace concrete functions to force side-effects:
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py in _list_functions_for_serialization(self, serialization_cache)
3080 self.train_tf_function = None
3081 functions = super(
-> 3082 Model, self)._list_functions_for_serialization(serialization_cache)
3083 self.train_function = train_function
3084 self.test_function = test_function
/usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py in _list_functions_for_serialization(self, serialization_cache)
3167 def _list_functions_for_serialization(self, serialization_cache):
3168 return (self._trackable_saved_model_saver
-> 3169 .list_functions_for_serialization(serialization_cache))
3170
3171 @property
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/base_serialization.py in list_functions_for_serialization(self, serialization_cache)
91 return {}
92
---> 93 fns = self.functions_to_serialize(serialization_cache)
94
95 # The parent AutoTrackable class saves all user-defined tf.functions, and
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/layer_serialization.py in functions_to_serialize(self, serialization_cache)
71 def functions_to_serialize(self, serialization_cache):
72 return (self._get_serialized_attributes(
---> 73 serialization_cache).functions_to_serialize)
74
75 def _get_serialized_attributes(self, serialization_cache):
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/layer_serialization.py in _get_serialized_attributes(self, serialization_cache)
87
88 object_dict, function_dict = self._get_serialized_attributes_internal(
---> 89 serialization_cache)
90
91 serialized_attr.set_and_validate_objects(object_dict)
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/model_serialization.py in _get_serialized_attributes_internal(self, serialization_cache)
55 objects, functions = (
56 super(ModelSavedModelSaver, self)._get_serialized_attributes_internal(
---> 57 serialization_cache))
58 functions['_default_save_signature'] = default_signature
59 return objects, functions
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/layer_serialization.py in _get_serialized_attributes_internal(self, serialization_cache)
96 """Returns dictionary of serialized attributes."""
97 objects = save_impl.wrap_layer_objects(self.obj, serialization_cache)
---> 98 functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)
99 # Attribute validator requires that the default save signature is added to
100 # function dict, even if the value is None.
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/save_impl.py in wrap_layer_functions(layer, serialization_cache)
195 for fn in fns.values():
196 if fn is not None and not isinstance(fn, LayerCall):
--> 197 fn.get_concrete_function()
198
199 # Restore overwritten functions and losses
/usr/lib/python3.7/contextlib.py in __exit__(self, type, value, traceback)
117 if type is None:
118 try:
--> 119 next(self.gen)
120 except StopIteration:
121 return False
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/save_impl.py in tracing_scope()
357 if training is not None:
358 with backend.deprecated_internal_learning_phase_scope(training):
--> 359 fn.get_concrete_function(*args, **kwargs)
360 else:
361 fn.get_concrete_function(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in get_concrete_function(self, *args, **kwargs)
1262 def get_concrete_function(self, *args, **kwargs):
1263 # Implements GenericFunction.get_concrete_function.
-> 1264 concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
1265 concrete._garbage_collector.release() # pylint: disable=protected-access
1266 return concrete
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _get_concrete_function_garbage_collected(self, *args, **kwargs)
1254 # run the first trace but we should fail if variables are created.
1255 concrete = self._stateful_fn._get_concrete_function_garbage_collected( # pylint: disable=protected-access
-> 1256 *args, **kwargs)
1257 if self._created_variables:
1258 raise ValueError("Creating variables on a non-first call to a function"
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _get_concrete_function_garbage_collected(self, *args, **kwargs)
3034 args, kwargs = None, None
3035 with self._lock:
-> 3036 graph_function, _ = self._maybe_define_function(args, kwargs)
3037 seen_names = set()
3038 captured = object_identity.ObjectIdentitySet(
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3290
3291 self._function_cache.add_call_context(cache_key.call_context)
-> 3292 graph_function = self._create_graph_function(args, kwargs)
3293 self._function_cache.add(cache_key, cache_key_deletion_observer,
3294 graph_function)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3138 arg_names=arg_names,
3139 override_flat_arg_shapes=override_flat_arg_shapes,
-> 3140 capture_by_value=self._capture_by_value),
3141 self._function_attributes,
3142 function_spec=self.function_spec,
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses)
1159 _, original_func = tf_decorator.unwrap(python_func)
1160
-> 1161 func_outputs = python_func(*func_args, **func_kwargs)
1162
1163 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
675 # the function a weak reference to itself to avoid a reference cycle.
676 with OptionalXlaContext(compile_with_xla):
--> 677 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
678 return out
679
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/save_impl.py in wrapper(*args, **kwargs)
570 with autocast_variable.enable_auto_cast_variables(
571 layer._compute_dtype_object): # pylint: disable=protected-access
--> 572 ret = method(*args, **kwargs)
573 _restore_layer_losses(original_losses)
574 return ret
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
168 return control_flow_util.smart_cond(
169 training, lambda: replace_training_and_call(True),
--> 170 lambda: replace_training_and_call(False))
171
172 # Create arg spec for decorated function. If 'training' is not defined in the
/usr/local/lib/python3.7/dist-packages/keras/utils/control_flow_util.py in smart_cond(pred, true_fn, false_fn, name)
104 pred, true_fn=true_fn, false_fn=false_fn, name=name)
105 return tf.__internal__.smart_cond.smart_cond(
--> 106 pred, true_fn=true_fn, false_fn=false_fn, name=name)
107
108
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name)
51 if pred_value is not None:
52 if pred_value:
---> 53 return true_fn()
54 else:
55 return false_fn()
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/utils.py in <lambda>()
167
168 return control_flow_util.smart_cond(
--> 169 training, lambda: replace_training_and_call(True),
170 lambda: replace_training_and_call(False))
171
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/utils.py in replace_training_and_call(training)
164 def replace_training_and_call(training):
165 set_training_arg(training, training_arg_index, args, kwargs)
--> 166 return wrapped_call(*args, **kwargs)
167
168 return control_flow_util.smart_cond(
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/save_impl.py in call(inputs, *args, **kwargs)
650 return layer.keras_api.__call__ # pylint: disable=protected-access
651 def call(inputs, *args, **kwargs):
--> 652 return call_and_return_conditional_losses(inputs, *args, **kwargs)[0]
653 return _create_call_fn_decorator(layer, call)
654
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/save_impl.py in __call__(self, *args, **kwargs)
608 def __call__(self, *args, **kwargs):
609 self._maybe_trace(args, kwargs)
--> 610 return self.wrapped_call(*args, **kwargs)
611
612 def get_concrete_function(self, *args, **kwargs):
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/traceback_utils.py in error_handler(*args, **kwargs)
151 except Exception as e:
152 filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153 raise e.with_traceback(filtered_tb) from None
154 finally:
155 del filtered_tb
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/save_impl.py in wrapper(*args, **kwargs)
570 with autocast_variable.enable_auto_cast_variables(
571 layer._compute_dtype_object): # pylint: disable=protected-access
--> 572 ret = method(*args, **kwargs)
573 _restore_layer_losses(original_losses)
574 return ret
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
168 return control_flow_util.smart_cond(
169 training, lambda: replace_training_and_call(True),
--> 170 lambda: replace_training_and_call(False))
171
172 # Create arg spec for decorated function. If 'training' is not defined in the
/usr/local/lib/python3.7/dist-packages/keras/utils/control_flow_util.py in smart_cond(pred, true_fn, false_fn, name)
104 pred, true_fn=true_fn, false_fn=false_fn, name=name)
105 return tf.__internal__.smart_cond.smart_cond(
--> 106 pred, true_fn=true_fn, false_fn=false_fn, name=name)
107
108
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/utils.py in <lambda>()
167
168 return control_flow_util.smart_cond(
--> 169 training, lambda: replace_training_and_call(True),
170 lambda: replace_training_and_call(False))
171
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/utils.py in replace_training_and_call(training)
164 def replace_training_and_call(training):
165 set_training_arg(training, training_arg_index, args, kwargs)
--> 166 return wrapped_call(*args, **kwargs)
167
168 return control_flow_util.smart_cond(
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/save_impl.py in call_and_return_conditional_losses(*args, **kwargs)
632 def call_and_return_conditional_losses(*args, **kwargs):
633 """Returns layer (call_output, conditional losses) tuple."""
--> 634 call_output = layer_call(*args, **kwargs)
635 if version_utils.is_v1_layer_or_model(layer):
636 conditional_losses = layer.get_losses_for(
/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_tf_wav2vec2.py in call(self, input_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, training, **kwargs)
1278 mask_time_indices = kwargs.get("mask_time_indices", None)
1279 if inputs["training"]:
-> 1280 hidden_states = self._mask_hidden_states(hidden_states, mask_time_indices=mask_time_indices)
1281
1282 encoder_outputs = self.encoder(
/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_tf_wav2vec2.py in _mask_hidden_states(self, hidden_states, mask_time_indices)
1212 mask_prob=self.config.mask_time_prob,
1213 mask_length=self.config.mask_time_length,
-> 1214 min_masks=2,
1215 )
1216 hidden_states = tf.where(
/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_tf_wav2vec2.py in _compute_mask_indices(shape, mask_prob, mask_length, min_masks)
264 print(tf.random.uniform((1,)))
265 print((mask_prob * sequence_length / mask_length + tf.random.uniform((1,)) )[0] )
--> 266 num_masked_spans = int(mask_prob * sequence_length / mask_length + tf.random.uniform((1,)))
267 num_masked_spans = max(num_masked_spans, min_masks)
268
TypeError: int() argument must be a string, a bytes-like object or a number, not 'Tensor'
```
### Expected behavior
```shell
I want to be able to export it to use it in tensorflow-serving
```
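
A graph-compatible sketch of the line that raised, based on the names and the `min_masks=2` default visible in the traceback (this is an illustration of a possible workaround, not the actual upstream fix): the Python `int()` builtin cannot consume a symbolic `Tensor` while Keras traces the call for SavedModel export, so the count has to be computed with tensor ops instead:

```python
import tensorflow as tf


def compute_num_masked_spans(sequence_length, mask_prob, mask_length, min_masks=2):
    # int(...) fails on a symbolic Tensor during tracing; tf.cast to int32
    # truncates toward zero exactly like int() does for positive floats,
    # and tf.maximum replaces the Python max() builtin.
    num_masked_spans = tf.cast(
        mask_prob * tf.cast(sequence_length, tf.float32) / mask_length
        + tf.random.uniform(()),
        tf.int32,
    )
    return tf.maximum(num_masked_spans, min_masks)
```

When traced with `tf.function` (as happens during `saved_model` export), this version yields a symbolic int32 tensor instead of raising the `TypeError` above. Note this only addresses the single failing line; the surrounding masking logic may need similar treatment if it also uses Python-level control flow on the result.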
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17233/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17232
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17232/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17232/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17232/events
|
https://github.com/huggingface/transformers/pull/17232
| 1,235,176,457
|
PR_kwDOCUB6oc43yECV
| 17,232
|
Fix Flava FlavaForPreTrainingIntegrationTest test
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @apsdehal for information :-)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
COLLABORATOR
| null |
# What does this PR do?
Fix Flava CI failure
```
> self.assertAlmostEqual(outputs.loss_info.mmm_text.item(), 1.75533199)
E AssertionError: 1.7553329467773438 != 1.75533199 within 7 places (9.56777343796844e-07 difference)
```
Just change the argument `places` to `4`.
[Job run log](https://github.com/huggingface/transformers/runs/6416748323?check_suite_focus=true)
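
For context on why `places=7` (the default) trips here: `assertAlmostEqual` rounds the difference to `places` decimal places and compares against zero, so a ~9.6e-07 drift rounds to 0.000001 at 7 places but to 0.0 at 4. A minimal standalone sketch using the values from the failing assertion:

```python
import unittest


class ToleranceDemo(unittest.TestCase):
    # Values copied from the failing CI assertion above.
    ACTUAL, EXPECTED = 1.7553329467773438, 1.75533199

    def test_seven_places_fails(self):
        # round(|a - b|, 7) == 1e-06, which is nonzero, so it raises.
        with self.assertRaises(AssertionError):
            self.assertAlmostEqual(self.ACTUAL, self.EXPECTED, places=7)

    def test_four_places_passes(self):
        # round(|a - b|, 4) == 0.0, so the comparison succeeds.
        self.assertAlmostEqual(self.ACTUAL, self.EXPECTED, places=4)


if __name__ == "__main__":
    unittest.main()
```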
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17232/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17232",
"html_url": "https://github.com/huggingface/transformers/pull/17232",
"diff_url": "https://github.com/huggingface/transformers/pull/17232.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17232.patch",
"merged_at": 1652728465000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17231
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17231/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17231/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17231/events
|
https://github.com/huggingface/transformers/pull/17231
| 1,235,163,700
|
PR_kwDOCUB6oc43yBSG
| 17,231
|
fix retribert's `test_torch_encode_plus_sent_to_model`
|
{
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh , this is a good question (which I also asked myself, I should have shared my thoughts on it). \r\n\r\nActually `self.bert_doc` is set to None in the model that is loaded. So, we can't test anything on `self.bert_doc` and `embed_answers` will be identical to `embed_questions` (and I remembered that we're testing the tokenizer not the model). But I follow what you think is best, i.e. if you think that it would be better to test the other 2 cases as a precaution if the checkpoint ever changes :hugs: ",
"> `self.bert_doc` is set to None\r\n\r\nIn this case, I think we can just keep what you have done so far in this PR, no need to change ๐ . Thanks for the info.",
"@SaulLu Guess we can merge this PR now :-) ? "
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR proposes a fix for the slow test `test_torch_encode_plus_sent_to_model` for RetriBert, which failed in yesterday's daily CI run.
The error stems from the fact that RetriBert is not a classical model: its input can be fed to one or both of the model's two encoders. As a result, `get_input_embeddings` is left unimplemented and the forward method expects more input arguments than the tokenizer outputs.
I have therefore overridden the common test with a RetriBert-specific test that reflects these two specificities, which I will highlight in a comment below.
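
To illustrate the mismatch, here is a toy stand-in (the class and argument names are hypothetical, not the real RetriBert signature) showing why the common test's generic `model(**tokenizer(...))` pattern cannot work when the forward signature requires more than the tokenizer produces:

```python
import inspect


class TwoEncoderModel:
    """Toy stand-in for a two-encoder retrieval model. The parameter names
    below are illustrative only and do not match the actual RetriBert API."""

    def forward(self, input_ids, attention_mask, input_ids_doc, attention_mask_doc):
        return None


# A plain tokenizer only produces the usual two tensors...
tokenizer_output = {"input_ids": [[101, 2023, 102]], "attention_mask": [[1, 1, 1]]}

# ...so the generic `model(**tokenizer_output)` call of the common test
# cannot satisfy the forward signature:
forward_params = set(inspect.signature(TwoEncoderModel.forward).parameters) - {"self"}
missing = forward_params - set(tokenizer_output)
print(sorted(missing))  # → ['attention_mask_doc', 'input_ids_doc']
```

Calling `model.forward(**tokenizer_output)` here would raise a `TypeError` for the missing arguments, which is why the common test needs a model-specific override.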
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed
## Local test
I've tested it by running:
```bash
RUN_SLOW=yes pytest tests/models/retribert/ -k test_torch_encode_plus_sent_to_model
```
output:
```
=========================================================================================== test session starts ============================================================================================
platform linux -- Python 3.9.12, pytest-7.1.1, pluggy-1.0.0
rootdir: /home/lucile_huggingface_co/repos/transformers, configfile: setup.cfg
plugins: dash-2.3.1, hypothesis-6.41.0, timeout-2.1.0, forked-1.4.0, xdist-2.5.0
collected 98 items / 97 deselected / 1 selected
tests/models/retribert/test_tokenization_retribert.py . [100%]
============================================================================== 1 passed, 97 deselected, 10 warnings in 5.38s ===============================================================================
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17231/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17231",
"html_url": "https://github.com/huggingface/transformers/pull/17231",
"diff_url": "https://github.com/huggingface/transformers/pull/17231.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17231.patch",
"merged_at": 1652790794000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17230
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17230/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17230/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17230/events
|
https://github.com/huggingface/transformers/issues/17230
| 1,235,076,094
|
I_kwDOCUB6oc5JncP-
| 17,230
|
Add RWKV2 (fast)
|
{
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"-- on second thoughts: it's not immediately clear to me how many people will use this particular model, or how it will perform. What I'd really like to do is implement and develop it on Hub, and see if it's useful/popular there. I spent an amount of time with the docs, and the route to adding new model architectures seems to preferentially support adding _directly_ to `transformers`. Tooling for new model architectures that worked on Hub (e.g. cookiecutter, class organisation, and tests) would be super neat. Is that something there's any interest in?",
"> -- on second thoughts: it's not immediately clear to me how many people will use this particular model, or how it will perform.\r\n\r\nTo answer your question: If it performs better than the other CausalLM models out there, it will most likely get used. Make a PR, build an initial version that can be run on HF, and see if any of the HF devs are willing to chime in. I am interested in this work, particularly because it solves a problem I haven't seen before: Be able to run CasualLM models on CPU. And my work stretches beyond the KoboldAI team, I know there are more out there that seem to benefit from the usage of CPU models because of the high prices that GPU models currently have.",
"Work is going OK. We're porting the GPT-like part to Transformers first, for training and induction, and will work out the fast RNN induction-only part after the GPT part passes tests. ",
"Where is your work at? I have worked on this model and would like to contribute. I'm also experienced now at troubleshooting the parts of this model (mostly inference accuracy though), and have spent time understanding the cuda kernels. I have some experience with adjusting new codebases to unexpected featureset combinations.",
"I'm also curious how this one is coming along. (I just saw the original paper today. Not sure how I missed it...)",
"@leondz are you guys still working on this? I am looking to get into this if this can work on edge devices",
"Some time ago I looked a little into continuing this, but other things came up.\r\nAfter that experience, I would recommend that future implementers start a new fork, rather than working off the existing one, because very little has been done, so it can take extra effort to learn the existing situation without much return.\r\nFor the record:\r\nleondz's branch is at https://github.com/leondz/transformers/tree/rwkv-v2 .\r\nI added smidges to it at https://github.com/xloem/transformers/tree/rwkv-v2 and https://github.com/xloem/transformers/tree/rwkv-v2-disable_non_clm_for_now .\r\n\r\nSince that work, RWKV is on version 4 now (although the changes between versions are not generally complex): https://github.com/BlinkDL/RWKV-LM",
"I can't understand why this hasn't seen wider adoption. It makes me a bit skeptical. If it's better in all ways compared to the original transformer paper why wouldn't we see adoption from Meta, OpenAI, DeepMind etc?",
"You could ask the same about any model or technology near the top of a leaderboard. Things happen because people do the work or make the business decisions behind them happening. There are scads and scads of things better than the original transformer paper, but they're not normative yet.",
"> I can't understand why this hasn't seen wider adoption. It makes me a bit skeptical. If it's better in all ways compared to the original transformer paper why wouldn't we see adoption from Meta, OpenAI, DeepMind etc?\r\n\r\nThis is better but GPT is good enough for most applications.\r\nI will just keep training larger models. RWKV 14B release soon. ",
"> I can't understand why this hasn't seen wider adoption. It makes me a bit skeptical. If it's better in all ways compared to the original transformer paper why wouldn't we see adoption from Meta, OpenAI, DeepMind etc?\r\n\r\nIt's not presented well and clearly, I am working on a fork or huggingface integration that answers questions, this is pretty much a breakthrough model imo, I am just making sure the runtimes are true. It still in R and D phase adoption phase comes soon after",
"I spent about a month working on this but the code wasn't stable and wasn't version controlled in the normal way, which made refactoring really tricky. Then time ran out. I think if the engineering side of things is fixed, and there's a stable release, it's a great model - definitely more data-efficient than competitors, which is really the core factor now.",
"> I can't understand why this hasn't seen wider adoption. It makes me a bit skeptical. If it's better in all ways compared to the original transformer paper why wouldn't we see adoption from Meta, OpenAI, DeepMind etc?\r\n\r\nFor our own project we have kind of basic support for it workarounded in with the original base, but the reason we don't finetune it or don't support it properly is because Huggingface support is missing and we are tightly integrated with huggingface. I assume other providers / projects have the same issue. For adoption I'd love to see RWKV land in huggingface so we can begin to offer it to our users the proper way, without them relying on manual steps, and without missing features for this model.",
"Yeah but why doesn't OpenAI literally just spend one month on this with 10\nguys and use this? It think this has some drawback but no one can tell me\nwhat it is... It's feel reasonable that all new papers from Google, OpenAI\nshould use this.\n\nDen ons 30 nov. 2022 18:55henk717 ***@***.***> skrev:\n\n> I can't understand why this hasn't seen wider adoption. It makes me a bit\n> skeptical. If it's better in all ways compared to the original transformer\n> paper why wouldn't we see adoption from Meta, OpenAI, DeepMind etc?\n>\n> For our own project we have kind of basic support for it workarounded in\n> with the original base, but the reason we don't finetune it or don't\n> support it properly is because Huggingface support is missing and we are\n> tightly integrated with huggingface. I assume other providers / projects\n> have the same issue. For adoption I'd love to see RWKV land in huggingface\n> so we can begin to offer it to our users the proper way, without them\n> relying on manual steps, and without missing features for this model.\n>\n> โ\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/17230#issuecomment-1332535414>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AHYLDTWSJQDOOINSE5GVFUDWK6IJZANCNFSM5V275BWA>\n> .\n> You are receiving this because you commented.Message ID:\n> ***@***.***>\n>\n",
"> Yeah but why doesn't OpenAI literally just spend one month on this with 10 guys and use this? It think this has some drawback but no one can tell me what it is... It's feel reasonable that all new papers from Google, OpenAI should use this. \r\n\r\nThere are a number of papers with similar \"exponential moving average\" design now. \r\n\r\nFor example, S4D is using slightly fancier kernels: https://github.com/HazyResearch/state-spaces (while I find simple kernels are enough).\r\n\r\nRWKV is weaker at LAMBADA (comparing with GPT) when the model is small (< 3B), but I find adding one single tiny QKV attention is enough to solve it (helps a small model to copy words in prompt).\r\n\r\nMoreover, it's reasonable to expect a competitive linear-time attention model, because when human novelists write very long stories the speed is consistent (except GRRM lol).",
"> \r\n\r\nI don't think this project is well known, theres a huge eco system based of just what works right now i.e T5 and GPT*x. For example percievers io, and percievers AR by deepmind seems to do something similar to get linear attention. To get this project to that level of popularity we have to build various production level proofs, most people already understand the challenges of T5 and GPT*x series. Second the models from a product perspective isn't as important, it's the data that is important. People are making the bets that its smarter to deploy a product with shitty AI and wait for the improvement before investing in the R and D. They build the product and make it easy to replace the AI portion of it in 10 minutes. These factors make it difficult to get projects and indepdent researchers to get the spotlight they need.",
"I understand. But this is the only architecture that has infinite context\nlength.\n\nDen tors 1 dec. 2022 17:01Michael Chung ***@***.***> skrev:\n\n> I don't think this project is well known, theres a huge eco system based\n> of just what works right now i.e T5 and GPT*x. For example percievers,\n> and percievers AR by deepmind seems to do something similar to get linear\n> attention. To get this project to that level of popularity we have to build\n> various production level proofs, most people already understand the\n> challenges of T5 and GPT*x series. Second the models from a product\n> perspective isn't as important, it's the data that is important. People are\n> making the bets that its smarter to deploy a product with shitty AI and\n> wait for the improvement before investing in the R and D. These factors\n> make it difficult to get projects and indepdent researchers to get the\n> spotlight they need.\n>\n> โ\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/17230#issuecomment-1333989472>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AHYLDTTICKMR7YJCRZTPKO3WLDDUTANCNFSM5V275BWA>\n> .\n> You are receiving this because you commented.Message ID:\n> ***@***.***>\n>\n",
"\"...this is the only architecture that has infinite context length.\"\r\n\r\nWait, really?... How did I miss that? I thought it was just a faster, more efficient approach.",
"\"So it's combining the best of RNN and transformer - great performance,\nfast inference, saves VRAM, fast training, \"infinite\" ctx_len, and free\nsentence embedding.\"\n\n> https://www.reddit.com/r/MachineLearning/comments/umq908/_/\n\nDen tors 1 dec. 2022 18:18jbm ***@***.***> skrev:\n\n> \"...this is the only architecture that has infinite context length.\"\n>\n> Wait, really?... How did I miss that? I thought it was just a faster, more\n> efficient approach?\n>\n> โ\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/17230#issuecomment-1334098970>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AHYLDTTLSQHYBSKLYA5BRX3WLDMYBANCNFSM5V275BWA>\n> .\n> You are receiving this because you commented.Message ID:\n> ***@***.***>\n>\n",
"The context length is presently limited by the accuracy of the floating point representation, due to the heavily simplified and unified architecture. RWKV is a strong combination of speed and long-context.",
"Right, okay. Well, that's pretty compelling, for sure...",
"> The context length is presently limited by the accuracy of the floating point representation, due to the heavily simplified and unified architecture. RWKV is a strong combination of speed and long-context.\r\n\r\nI think its also limited by the memory as well",
"There is no memory limit associated with context length that I am aware of with these models. State can be retained in a recurrent manner, providing for using only however much memory is available for accelerated parallel operation.",
"> There is no memory limit associated with context length that I am aware of with these models. State can be retained in a recurrent manner, providing for using only however much memory is available for accelerated parallel operation.\r\n\r\nSo you are telling me, that the `context` is effectively encoded into the state. I am reffering to the context length of the model consumes. I guess what you are trying to say is that because we have a state, the model can look into that state for any context size? as a result it has an infinite context length? I looked into the code and it says \r\n```\r\n T_MAX = 1024 # increase this if your ctx_len is long [NOTE: TAKES LOTS OF VRAM!]\r\n```\r\nso it appears to have a limit based off memory @BlinkDL can you clearify ? ",
"I should let Blink clarify, but regarding T_MAX: https://github.com/BlinkDL/RWKV-LM/blob/a268cd2e40351ee31c30c5f8a5d1266d35b41829/RWKV-v4neo/src/model.py#L34\r\n",
"Since the model support for this stalled, perhaps someone on HF's side such as @younesbelkada can help get this model supported?",
"> > There is no memory limit associated with context length that I am aware of with these models. State can be retained in a recurrent manner, providing for using only however much memory is available for accelerated parallel operation.\r\n> \r\n> So you are telling me, that the `context` is effectively encoded into the state. I am reffering to the context length of the model consumes. I guess what you are trying to say is that because we have a state, the model can look into that state for any context size? as a result it has an infinite context length? I looked into the code and it says\r\n> \r\n> ```\r\n> T_MAX = 1024 # increase this if your ctx_len is long [NOTE: TAKES LOTS OF VRAM!]\r\n> ```\r\n> \r\n> so it appears to have a limit based off memory @BlinkDL can you clearify ?\r\n\r\nI am not using the correct method to train it because I am lazy. But you can always finetune the model to support longer ctxlen. For example, fine-tuned to 4096 here:\r\n\r\nhttps://huggingface.co/BlinkDL/rwkv-4-pile-3b\r\n\r\nWith the correct training method, I estimate the effective ctx_len can at least be 100K.",
"So it doesn't have \"infinite\" ctx_len.\n\nDen lรถr 3 dec. 2022 06:26PENG Bo ***@***.***> skrev:\n\n> There is no memory limit associated with context length that I am aware of\n> with these models. State can be retained in a recurrent manner, providing\n> for using only however much memory is available for accelerated parallel\n> operation.\n>\n> So you are telling me, that the context is effectively encoded into the\n> state. I am reffering to the context length of the model consumes. I guess\n> what you are trying to say is that because we have a state, the model can\n> look into that state for any context size? as a result it has an infinite\n> context length? I looked into the code and it says\n>\n> T_MAX = 1024 # increase this if your ctx_len is long [NOTE: TAKES LOTS OF VRAM!]\n>\n> so it appears to have a limit based off memory @BlinkDL\n> <https://github.com/BlinkDL> can you clearify ?\n>\n> I am not using the correct method to train it because I am lazy.\n>\n> But you can always finetune the model to support longer ctxlen. For\n> example, fine-tuned to 4096 here:\n>\n> https://huggingface.co/BlinkDL/rwkv-4-pile-3b\n>\n> โ\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/17230#issuecomment-1336066067>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AHYLDTTHM2NCFZJFFG4JF63WLLKWTANCNFSM5V275BWA>\n> .\n> You are receiving this because you commented.Message ID:\n> ***@***.***>\n>\n",
"I suspect technically if you used a rational number representation rather than floating point it would have infinite context length.\r\n\r\nAside: Iโm not an ML researcher, but I donโt know why downscaling like this doesnโt get more attention. It seems context length could be fully infinite by re-encoding past information for what is helpful for future states, and a network wired to discover its own architecture would quickly find this.",
"> So it doesn't have \"infinite\" ctx_len. Den lรถr 3 dec. 2022 06:26PENG Bo ***@***.***> skrev:\r\n\r\nRNN has infinite ctx_len if you use correct training & inference method.\r\n\r\nI am just being lazy because when the model is small it can't even generate perfect result for 1024 ctxlen.\r\n\r\nSo I will improve it only after the 50B params model."
] | 1,652
| 1,683
| 1,683
|
CONTRIBUTOR
| null |
### Model description
I would like to implement a new model architecture.
## Short description
RWKV v2 is an "RNN with transformer-level performance, without using attention. Similar to Apple's Attention Free Transformer. All trained models open-source. Inference is very fast (even on CPUs) and might work on cell phones. There's also a GPT-type implementation." -- ([Hochreiter's description](https://twitter.com/HochreiterSepp/status/1524270961314484227))
RWKV v2 is parallelizable because the time-decay of each channel is data-independent (and trainable). For example, in a usual RNN you can adjust the time-decay of a channel from, say, 0.8 to 0.5 (these are called "gates"), while in RWKV v2 you simply move the information from a W-0.8-channel to a W-0.5-channel to achieve the same effect. RWKV can leverage GPUs, but doesn't need to.
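To make the "W-0.8-channel vs. W-0.5-channel" idea concrete, here is a minimal, runnable toy sketch (plain Python, hypothetical function name — not the RWKV implementation) of a per-channel, data-independent exponential time-decay:

```python
def channel_decay_mix(x, w):
    """Toy per-channel time decay in the spirit of RWKV.

    x: list of timesteps, each a list of C channel values
    w: list of C per-channel decay factors (trainable, data-independent)
    Returns the exponentially decayed running mix for each channel.
    """
    C = len(w)
    state = [0.0] * C
    out = []
    for xt in x:
        # The decay w[c] is fixed per channel, never gated by the input --
        # this is what makes the recurrence parallelizable as a convolution.
        state = [w[c] * state[c] + xt[c] for c in range(C)]
        out.append(state[:])
    return out

x = [[1.0, 1.0]] * 4          # constant input on 2 channels
w = [0.8, 0.5]                # a "W-0.8 channel" and a "W-0.5 channel"
y = channel_decay_mix(x, w)
```

Because `w` does not depend on the data, the loop unrolls into `out[t] = sum over s <= t of w**(t-s) * x[s]`, which can be computed for all timesteps in parallel (unlike an input-gated RNN).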
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
## Implementation and weights
There's an implementation at [BlinkDL/RWKV-LM](https://github.com/BlinkDL/RWKV-LM) which also gives a detailed description of the model internals and some performance benchmarks. Model weights are currently being trained for a few datasets, including the Pile (see e.g. [BlinkDL/RWKV-v2-RNN-Pile](https://github.com/BlinkDL/RWKV-v2-RNN-Pile/)) and [Danish Gigaword](https://gigaword.dk) by me. Both will be openly available - some checkpoints for the Pile already are, even though it's an ongoing process.
## Status
The model seems quite exciting and I'm able to replicate preliminary results. I'm already talking with @BlinkDL about the implementation. I'm happy to implement/port the model architecture (for both RNN and GPT variants), tokenizer, and tests myself (and have already started) and would appreciate help and advice.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17230/reactions",
"total_count": 31,
"+1": 19,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 12
}
|
https://api.github.com/repos/huggingface/transformers/issues/17230/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17229
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17229/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17229/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17229/events
|
https://github.com/huggingface/transformers/pull/17229
| 1,235,062,194
|
PR_kwDOCUB6oc43xrc1
| 17,229
|
OPT-fix
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Ok let's merge this one first and then I'll rebase mine here: https://github.com/huggingface/transformers/pull/17228",
"@younesbelkada can you also replace:\r\n\r\n```\r\n tokenizer = GPT2Tokenizer.from_pretrained(\"patrickvonplaten/opt_gpt2_tokenizer\")\r\n```\r\n\r\nin the embedding test with \r\n\r\n```\r\n tokenizer = GPT2Tokenizer.from_pretrained(self.path_model)\r\n```",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, just a nit:\r\n\r\nThis line under `OPTGenerationTest`\r\n```\r\nmodel = OPTForCausalLM.from_pretrained(self.path_model)\r\n```\r\nhas `self.path_model` undefined.",
"And there are still 2 `patrickvonplaten/opt_gpt2_tokenizer` in the current version, but probably these are intended? I will leave this part for you ang Patrick."
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
Quickly fixing 3 testing issues!
cc @patrickvonplaten @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17229/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17229",
"html_url": "https://github.com/huggingface/transformers/pull/17229",
"diff_url": "https://github.com/huggingface/transformers/pull/17229.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17229.patch",
"merged_at": 1652447663000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17228
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17228/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17228/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17228/events
|
https://github.com/huggingface/transformers/pull/17228
| 1,235,032,529
|
PR_kwDOCUB6oc43xlJg
| 17,228
|
OPT - fix docstring and improve tests slighly
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
MEMBER
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Some final fixes for OPT that we forgot yesterday
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17228/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17228",
"html_url": "https://github.com/huggingface/transformers/pull/17228",
"diff_url": "https://github.com/huggingface/transformers/pull/17228.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17228.patch",
"merged_at": 1652447690000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17227
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17227/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17227/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17227/events
|
https://github.com/huggingface/transformers/pull/17227
| 1,234,871,364
|
PR_kwDOCUB6oc43xCvT
| 17,227
|
Adds support for OPT in Flax and TF.
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"> Thanks for adding those! Without the `TFOPTForCausalLM`, I don't see the point of adding the TF version of OPT since it can't really be used, so would either not add TF yet or make sure this model is added before merging the PR.\r\n\r\nYes I am not done yet! Sorry if I pinged you a bit early",
"@ArthurZucker let me know if the PR is ready for a review or you need help with the tests :-) ",
"> @ArthurZucker let me know if the PR is ready for a review or you need help with the tests :-)\r\n\r\nI just have 1 last test that behave strangely (its more about padding tokens and positional embedings) but the jax code will be ready for review tomorrow 12am. Then will work quickly on the tf code and the PR should be ready by the end of the week! ",
"FLAX code is pretty much done. The only test that I can't solve is the difference in output for the jited model generation! \r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17227). All of your documentation changes will be reflected on that endpoint.",
"@ArthurZucker could we add a test similar to this one: https://github.com/huggingface/transformers/pull/17359 to both Flax and TF? \r\n\r\n@Rocketknight1 @gante could you check the TF version here as well? ",
"@ArthurZucker,\r\n\r\nDo you think we could fix the PR (I think the PR history is a bit messed up). Also totally fine to close this PR and just open a new PR (move all the relevant files to a new PR) if the git correction is too difficult",
"> @ArthurZucker,\r\n> \r\n> \r\n> \r\n> Do you think we could fix the PR (I think the PR history is a bit messed up). Also totally fine to close this PR and just open a new PR (move all the relevant files to a new PR) if the git correction is too difficult\r\n\r\nHey, I think we can close it. \r\nWill create a new clean branch"
] | 1,652
| 1,653
| 1,653
|
COLLABORATOR
| null |
# What does this PR do?
Adds support for OPT in Flax and TF.
Also cleans up the PyTorch code a bit.
## Who can review?
@LysandreJik, @patrickvonplaten, @patil-suraj, @sgugger
Sorry for the two pull requests in a row: I pulled from main instead of rebasing and ended up with the entire commit history, so I created a new branch to clean things up.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17227/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17227/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17227",
"html_url": "https://github.com/huggingface/transformers/pull/17227",
"diff_url": "https://github.com/huggingface/transformers/pull/17227.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17227.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17226
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17226/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17226/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17226/events
|
https://github.com/huggingface/transformers/pull/17226
| 1,234,811,733
|
PR_kwDOCUB6oc43w2Wu
| 17,226
|
Add support for Opt in tf and flax
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Closing for a new pull request where history is fixed"
] | 1,652
| 1,652
| 1,652
|
COLLABORATOR
| null |
# What does this PR do?
Adds support for OPT in Flax and TF.
Also cleans up the PyTorch code a bit.
## Who can review?
@LysandreJik, @patrickvonplaten, @patil-suraj, @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17226/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17226",
"html_url": "https://github.com/huggingface/transformers/pull/17226",
"diff_url": "https://github.com/huggingface/transformers/pull/17226.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17226.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17225
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17225/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17225/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17225/events
|
https://github.com/huggingface/transformers/pull/17225
| 1,234,779,986
|
PR_kwDOCUB6oc43wvue
| 17,225
|
OPTForCausalLM lm_head input size should be config.word_embed_proj_dim
|
{
"login": "vfbd",
"id": 89268918,
"node_id": "MDQ6VXNlcjg5MjY4OTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/89268918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vfbd",
"html_url": "https://github.com/vfbd",
"followers_url": "https://api.github.com/users/vfbd/followers",
"following_url": "https://api.github.com/users/vfbd/following{/other_user}",
"gists_url": "https://api.github.com/users/vfbd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vfbd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vfbd/subscriptions",
"organizations_url": "https://api.github.com/users/vfbd/orgs",
"repos_url": "https://api.github.com/users/vfbd/repos",
"events_url": "https://api.github.com/users/vfbd/events{/privacy}",
"received_events_url": "https://api.github.com/users/vfbd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi!\r\nThanks for pointing out the issue :)\r\n@patrickvonplaten @ArthurZucker Do we have tests with models that doesn't have the same `hidden_dim` and `word_embed_proj_dim` ? Wondering why the tests are still passing",
"For extra information, we noticed this on KoboldAI and our software automatically saves the model the first time it is downloaded to the cache. We do that to help our users store the model in the most suitable format for them for later offline use. So if all tests pass also check the model after it has been saved using huggingface transformers rather than the converters.",
"Regarding tests, we should probably add a fast test that randomly initializes a model with `word_embed_proj_dim` != `hidden_size`. Essentially, we could add a test like the following: https://github.com/huggingface/transformers/blob/ee393c009a243bbb86fa11d2efa771a1704d85cf/tests/models/opt/test_modeling_opt.py#L235\r\nOnly that we overwrite the `word_embed_proj_dim` variable with something != hidden_size before initializing a random model.\r\n@vfbd would cool if you could add a test for this - if you have no time that's totally fine as well and we could add a test afterwards",
"> Regarding tests, we should probably add a fast test that randomly initializes a model with `word_embed_proj_dim` != `hidden_size`. Essentially, we could add a test like the following:\r\n> \r\n> https://github.com/huggingface/transformers/blob/ee393c009a243bbb86fa11d2efa771a1704d85cf/tests/models/opt/test_modeling_opt.py#L235\r\n> \r\n> \r\n> Only that we overwrite the `word_embed_proj_dim` variable with something != hidden_size before initializing a random model.\r\n> @vfbd would cool if you could add a test for this - if you have no time that's totally fine as well and we could add a test afterwards\r\n\r\n\r\n\r\nI think I can take care of that in my FLAX PR, or should I rather create a new PR? ",
"> in\r\n\r\nA new PR would be great :-)",
"Any updates on this?",
"Thanks for the ping @mrseeker - this is good for merge IMO :-)"
] | 1,652
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The input size of `lm_head` in `OPTForCausalLM` should be `config.word_embed_proj_dim`, not `config.hidden_size`. This is because, like the comment above the changed line says, `lm_head.weight` is tied to `model.decoder.embed_tokens.weight`, so the input size of lm_head should be the output size of embed_tokens (which is `config.word_embed_proj_dim`) and vice-versa.
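The shape argument above can be sketched in plain Python (the dimension values below are illustrative, not taken from a specific checkpoint): since the tied embedding table has shape `(vocab_size, word_embed_proj_dim)`, and a linear layer stores its weight as `(out_features, in_features)`, the tied `lm_head` necessarily has `in_features = word_embed_proj_dim`:

```python
# Illustrative dimensions (hypothetical values for a model where the
# embedding projection dim differs from the hidden size).
vocab_size = 50272
hidden_size = 768
word_embed_proj_dim = 512

# embed_tokens is an embedding table: one row per token.
embed_tokens_weight_shape = (vocab_size, word_embed_proj_dim)

# A linear layer's weight is stored as (out_features, in_features), so a
# head tied to embed_tokens.weight must take word_embed_proj_dim inputs.
lm_head_out_features, lm_head_in_features = embed_tokens_weight_shape

assert lm_head_in_features == word_embed_proj_dim
assert lm_head_in_features != hidden_size  # using hidden_size here is the bug
```

In other words, constructing the head with `config.hidden_size` only happens to work when `hidden_size == word_embed_proj_dim`, which is why the bug went unnoticed by tests.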
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17225/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17225",
"html_url": "https://github.com/huggingface/transformers/pull/17225",
"diff_url": "https://github.com/huggingface/transformers/pull/17225.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17225.patch",
"merged_at": 1653333630000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17224
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17224/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17224/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17224/events
|
https://github.com/huggingface/transformers/issues/17224
| 1,234,647,123
|
I_kwDOCUB6oc5JlzhT
| 17,224
|
ALBEF: Align Before Fuse
|
{
"login": "ggoggam",
"id": 47265378,
"node_id": "MDQ6VXNlcjQ3MjY1Mzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/47265378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggoggam",
"html_url": "https://github.com/ggoggam",
"followers_url": "https://api.github.com/users/ggoggam/followers",
"following_url": "https://api.github.com/users/ggoggam/following{/other_user}",
"gists_url": "https://api.github.com/users/ggoggam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ggoggam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggoggam/subscriptions",
"organizations_url": "https://api.github.com/users/ggoggam/orgs",
"repos_url": "https://api.github.com/users/ggoggam/repos",
"events_url": "https://api.github.com/users/ggoggam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ggoggam/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"@jkgrad What is the state of this issue? If no one is working on this, I would like to implement it.",
"Hey @DanielFLevine, we'd love for you to try and contribute that model! \r\n\r\ncc @NielsRogge who can help out once he's back from leave :)",
"@LysandreJik @NielsRogge Great! I've already started looking over the authors' code. Will reach out with any questions.",
"Is there still interest for this?",
"Same question",
"@DanielFLevine-zz - any updates on the model port?"
] | 1,652
| 1,699
| null |
NONE
| null |
### Model description
Align Before Fuse (ALBEF) is a vision-language (VL) model that showed competitive results in numerous VL tasks such as image-text retrieval, visual question answering, visual entailment, and visual grounding.
The authors propose to use a text encoder (the first half of BERT's layers) and an image encoder (ViT) to create aligned representations for each modality before fusing them with a multi-modal encoder (the second half of BERT's layers). The model is trained with multi-modal representation objectives and momentum distillation to achieve state-of-the-art results on VL tasks.
As multi-modal models are gaining more attention in academia/industry, I think ALBEF could be a nice addition to the transformers library.
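For readers unfamiliar with the align-before-fuse idea, here is a minimal NumPy sketch of the image-text contrastive (ITC) alignment step. This is an illustration of the concept only, not ALBEF's actual implementation; the embeddings, dimensions, and temperature value are all made up for the example.

```python
import numpy as np

def itc_logits(image_emb, text_emb, temperature=0.07):
    """Image-text contrastive (ITC) logits: cosine similarity of
    L2-normalized unimodal embeddings, scaled by a temperature."""
    img = image_emb / np.linalg.norm(image_emb, axis=-1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=-1, keepdims=True)
    return img @ txt.T / temperature

# Toy batch of 3 image/text pairs; matched pairs point in similar directions.
images = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0]])
texts = np.array([[0.9, 0.1, 0.0, 0.0],
                  [0.1, 0.9, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.1]])

logits = itc_logits(images, texts)
# Each image's highest logit is its own caption (the diagonal).
print(logits.argmax(axis=1))  # [0 1 2]
```

Aligning the unimodal embeddings this way before fusion is what gives the multi-modal encoder an easier job, per the paper.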
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
- There are an official implementation and pre-trained/fine-tuned weights by the authors at this [repo](https://github.com/salesforce/ALBEF)
- Link to the [paper](https://arxiv.org/abs/2107.07651)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17224/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/17223
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17223/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17223/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17223/events
|
https://github.com/huggingface/transformers/pull/17223
| 1,234,642,885
|
PR_kwDOCUB6oc43wTEu
| 17,223
|
Add type hints for ProphetNet (Pytorch)
|
{
"login": "jQuinRivero",
"id": 55513213,
"node_id": "MDQ6VXNlcjU1NTEzMjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/55513213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jQuinRivero",
"html_url": "https://github.com/jQuinRivero",
"followers_url": "https://api.github.com/users/jQuinRivero/followers",
"following_url": "https://api.github.com/users/jQuinRivero/following{/other_user}",
"gists_url": "https://api.github.com/users/jQuinRivero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jQuinRivero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jQuinRivero/subscriptions",
"organizations_url": "https://api.github.com/users/jQuinRivero/orgs",
"repos_url": "https://api.github.com/users/jQuinRivero/repos",
"events_url": "https://api.github.com/users/jQuinRivero/events{/privacy}",
"received_events_url": "https://api.github.com/users/jQuinRivero/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> This looks good to me now, thank you! Let me know when you're ready and I'll merge it.\r\n\r\nGo ahead! Thanks!"
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
Adding type hints for forward methods in user-facing classes for the ProphetNet model (PyTorch), as mentioned in #16059
@Rocketknight1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17223/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17223",
"html_url": "https://github.com/huggingface/transformers/pull/17223",
"diff_url": "https://github.com/huggingface/transformers/pull/17223.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17223.patch",
"merged_at": 1652876627000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17222
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17222/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17222/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17222/events
|
https://github.com/huggingface/transformers/issues/17222
| 1,234,576,122
|
I_kwDOCUB6oc5JliL6
| 17,222
|
(T5) tf.function wrapped model.generate() does not produce the same result as non-wrapped model.generate()
|
{
"login": "JEF1056",
"id": 22546776,
"node_id": "MDQ6VXNlcjIyNTQ2Nzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/22546776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JEF1056",
"html_url": "https://github.com/JEF1056",
"followers_url": "https://api.github.com/users/JEF1056/followers",
"following_url": "https://api.github.com/users/JEF1056/following{/other_user}",
"gists_url": "https://api.github.com/users/JEF1056/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JEF1056/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JEF1056/subscriptions",
"organizations_url": "https://api.github.com/users/JEF1056/orgs",
"repos_url": "https://api.github.com/users/JEF1056/repos",
"events_url": "https://api.github.com/users/JEF1056/events{/privacy}",
"received_events_url": "https://api.github.com/users/JEF1056/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@JEF1056 thank you for pointing it out ๐ I can reproduce the issue with transformers==4.19 and 4.18 (4.17 and older versions do not support the `tf.function` wrapper), with and without input padding.\r\n\r\nI will look into it and let you know of any findings :) In the recent past, we found numerical instabilities in some compiled functions on CPU, so it may be related.",
"Interesting. I've tried with a GPU and used the master branch of the repository as well and results are identical. Unfortunately, I don't have a lot of experience with XLA so i'm not sure if I can be of much help.\r\n\r\nThanks for picking up the issue though!",
"@JEF1056 upon further digging in debugging mode, I couldn't find any unexpected behavior. I did encounter the expected behavior, which explains the mismatch:\r\n- `tf.function` compiles and optimizes the graph, which rearranges FP32 operations. Rearranging FP32 operations leads to very minor numerical differences (see [here](https://stackoverflow.com/questions/48957828/floating-point-arithmetic-why-would-order-of-addition-matter) an explanation). We can see this in the encoder forward pass, which has differences in the order of 1e-6;\r\n- Generation is in essence a sequence of forward passes, where past hidden outputs are fed as inputs. What starts as a tiny difference quickly builds up into a larger difference, which at some point results in different tokens.\r\n\r\nThe reverse can also be observed: if we pick an input with a stronger signal or a more powerful model, we see smaller differences at a token level. Consider the following example, and try it out with `t5-small`, `t5-base`, and `t5-large`:\r\n\r\n```\r\nfrom transformers import TFT5ForConditionalGeneration, AutoTokenizer\r\nimport tensorflow as tf\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"t5-large\")\r\nmodel = TFT5ForConditionalGeneration.from_pretrained(\"t5-large\")\r\n\r\ninput_ids = tokenizer(\"translate English to German: This is a very long sentence that is easy to translate because it has common words.\", padding='max_length', return_tensors=\"tf\").input_ids\r\n\r\n# This is normal model.generate()\r\noutputs_0 = model.generate(input_ids)\r\nprint(outputs_0)\r\nprint(tokenizer.batch_decode(outputs_0))\r\n\r\n# This is wrapped with tf.function()\r\nwrapped = tf.function(model.generate)\r\noutputs_1 = wrapped(input_ids)\r\nprint(outputs_1)\r\nprint(tokenizer.batch_decode(outputs_1))\r\n```\r\n\r\nThe outputs are not exactly the same, but they are sensible. 
In any case, `tf.function` + generation is something we are working at the moment, stay tuned for further updates (which may alleviate this issue) :D",
"Also -- @Rocketknight1 @patrickvonplaten this issue [`tf.function` resulting in different FP32 ops -> different generate results] is something I'm seeing constantly and, curiously, the `tf.function` generation outputs qualitatively worse text. I wonder if there is any way to mitigate this issue and/or if we should try to contact the TF team ๐ค ",
"@gante I see, and that does make sense. I presume that the issue occurs in T5 and not GPT2 becasue T5 is encoder-decoder, while GPT2 is decoder only?\r\n```\r\nfrom transformers import TFGPT2LMHeadModel, AutoTokenizer\r\nimport tensorflow as tf\r\nimport numpy as np\r\n\r\nmodel_name = 'gpt2'\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nmodel = TFGPT2LMHeadModel.from_pretrained(model_name)\r\n\r\ninput_ids = tokenizer(\"This is a sentence that \", return_tensors=\"tf\").input_ids\r\n\r\n# This is normal model.generate()\r\noutputs_0 = model.generate(input_ids, max_length=100)\r\nprint(outputs_0)\r\nprint(tokenizer.batch_decode(outputs_0))\r\n\r\n# This is wrapped with tf.function()\r\nwrapped = tf.function(lambda x: model.generate(x, max_length=100))\r\noutputs_1 = wrapped(input_ids)\r\nprint(outputs_1)\r\nprint(tokenizer.batch_decode(outputs_1))\r\n\r\nassert np.array_equal(outputs_0, outputs_1), \"Results are not equal.\"\r\n```\r\nThe above code works fine.\r\n\r\nFor context, I've been trying to convert t5 to tensorflowjs and though I have got it working before by creating my own generate function in tf.js, having huggingface's generate function directly as part of the savedmodel would really improve its speed.\r\nIn the current t5 tf.function generate() wrapper, XlaDynamicUpdateSlice and TensorListConcatV2 ops are used, which pobably won't ever be supported by tf.js, do you know if there is any way to implement tf.function without these?",
"@gante from my experience small numerical differences in the magnitude of 1e-6 should not lead to different tokens being generated (in flax they don't). Also the generate outputs of https://github.com/huggingface/transformers/issues/17222#issue-1234576122 look quite bad. Since we don't seem to have this issue in GPT2, could it be that something encoder-decoder specific is not done correctly in `tf.function` ? E.g. the `encoder_attention_mask` or the cache? @JEF1056 could you try running the code also with `use_cache=False` to see if we still get a difference ?\r\n\r\nTo me it looks like a bug still from our side",
"@patrickvonplaten\r\n\r\nWith use_cache:\r\n```\r\nfrom transformers import TFT5ForConditionalGeneration, AutoTokenizer\r\nimport tensorflow as tf\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"t5-small\")\r\nmodel = TFT5ForConditionalGeneration.from_pretrained(\"t5-small\")\r\n\r\ninput_ids = tokenizer(\"translate English to German: This is a very short sentence.\", padding='max_length', return_tensors=\"tf\").input_ids\r\n\r\n# This is normal model.generate()\r\noutputs_0 = model.generate(input_ids)\r\nprint(outputs_0)\r\nprint(tokenizer.batch_decode(outputs_0))\r\n\r\n# This is wrapped with tf.function()\r\nwrapped = tf.function(model.generate)\r\noutputs_1 = wrapped(input_ids)\r\nprint(outputs_1)\r\nprint(tokenizer.batch_decode(outputs_1))\r\n\r\nassert outputs_0 == outputs_1, \"Results are not equal.\"\r\n```\r\n```\r\ntf.Tensor([[ 0 644 229 236 1319 7755 49 20144 5 1]], shape=(1, 10), dtype=int32)\r\n['<pad> Das ist ein sehr kurzer Satz.</s>']\r\ntf.Tensor(\r\n[[ 0 644 229 236 1319 7755 5 1 0 0 0 0 0 0\r\n 0 0 0 0 0 0]], shape=(1, 20), dtype=int32)\r\n['<pad> Das ist ein sehr kurz.</s><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>']\r\n```\r\n\r\nWithout use_cache:\r\n```\r\nfrom transformers import TFT5ForConditionalGeneration, AutoTokenizer\r\nimport tensorflow as tf\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"t5-small\")\r\nmodel = TFT5ForConditionalGeneration.from_pretrained(\"t5-small\")\r\n\r\ninput_ids = tokenizer(\"translate English to German: This is a very short sentence.\", padding='max_length', return_tensors=\"tf\").input_ids\r\n\r\n# This is normal model.generate()\r\noutputs_0 = model.generate(input_ids, use_cache=False)\r\nprint(outputs_0)\r\nprint(tokenizer.batch_decode(outputs_0))\r\n\r\n# This is wrapped with tf.function()\r\nwrapped = tf.function(lambda x: model.generate(x, use_cache=False))\r\noutputs_1 = wrapped(input_ids)\r\nprint(outputs_1)\r\nprint(tokenizer.batch_decode(outputs_1))\r\n\r\nassert outputs_0 == 
outputs_1, \"Results are not equal.\"\r\n```\r\n```\r\ntf.Tensor([[ 0 644 229 236 1319 7755 49 20144 5 1]], shape=(1, 10), dtype=int32)\r\n['<pad> Das ist ein sehr kurzer Satz.</s>']\r\ntf.Tensor(\r\n[[ 0 644 229 236 1319 7755 5 1 0 0 0 0 0 0\r\n 0 0 0 0 0 0]], shape=(1, 20), dtype=int32)\r\n['<pad> Das ist ein sehr kurz.</s><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>']\r\n```\r\n\r\nResults appear to be the same.\r\nI would also like to bring up that after wrapping generate() in tf.function, it no longer appears to respect stop tokens, which should be something that can be fixed on huggingface's side.\r\n\r\nI think @gante 's point that small numerical differences magnified by generating an autoregressive fashion is correct, as results diverge wildly the longer the generated sentence is.",
"@JEF1056, \r\n\r\nNote that the model does generate the EOS token. It's expected to get padding in the end since shapes need to be static when using XLA. You can remove the `<pad>` tokens by doing:\r\n\r\n```py\r\nprint(tokenizer.batch_decode(outputs_1, skip_special_tokens=True))\r\n```\r\n\r\nStill interested in a more in-detail analysis where exactly the differences start to creep in and at what point they become very significant. XLA generation does not look good enough to me to not be a bug",
"I see, I didn't realize that XLA shapes need to be static (seems waste compute for longer sequences though). That might pose some difficulties in getting generate() to work with GPT2 (decoder models in general) since inputs can't be padded to length in that case.\r\n\r\n",
"For some additional context @JEF1056: we have already found (and mitigated) an XLA/non-XLA mismatch (see https://github.com/tensorflow/tensorflow/issues/55682), so we can't rule out conversion problems. Sadly, they are hard to detect, as it relies on numerical debugging with XLA.\r\n\r\nI'm going to continue developing XLA + generate, keeping this issue in the backlog -- there is a chance future changes fix the issue we are seeing, or that I stumble across the root cause naturally as I enable the use of `tf.function` on other models.\r\n\r\nThank you for raising the issue, and let us know if your see related problems ๐ ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This issue is fixed, XLA T5 should be working properly :)"
] | 1,652
| 1,659
| 1,659
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.11.0+cu113 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@patrik @gante
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Run the code below:
```
from transformers import TFT5ForConditionalGeneration, AutoTokenizer
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer("translate English to German: I need to convert this into a savedmodel.", padding='max_length', return_tensors="tf").input_ids
# This is normal model.generate()
outputs_0 = model.generate(input_ids)
print(outputs_0)
print(tokenizer.batch_decode(outputs_0))
# This is wrapped with tf.function()
wrapped = tf.function(model.generate)
outputs_1 = wrapped(input_ids)
print(outputs_1)
print(tokenizer.batch_decode(outputs_1))
assert outputs_0 == outputs_1, "Results are not equal."
```
2. Observe results. This is what i get on my machine:
```
tf.Tensor(
[[ 0 1674 2171 67 7 16 236 20819 15 7 8731 561
18980 29 5 1]], shape=(1, 16), dtype=int32)
['<pad> Ich muss dies in ein gespeichertes Modell umwandeln.</s>']
tf.Tensor(
[[ 0 1674 2171 67 7 16 236 20819 15 7 7 7
7 7 7 7 7 7 7 7]], shape=(1, 20), dtype=int32)
['<pad> Ich muss dies in ein gespeichertesssssssssss']
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
[<ipython-input-74-27b3c9e72d47>](https://localhost:8080/#) in <module>()
18 print(tokenizer.batch_decode(outputs_1))
19
---> 20 assert outputs_0 == outputs_1, "Results are not equal."
AssertionError: Results are not equal.
```
This issue gets worse the larger the max_length option in model.generate() gets.
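The diverging outputs are consistent with float32 reordering under graph compilation. Here is a minimal, framework-independent sketch (not tied to T5 or `tf.function`) of how summing the same float32 numbers in a different order changes the rounded result:

```python
import numpy as np

# float32 addition is not associative: the same three numbers summed
# in a different order give a different rounded result.
a = np.float32(1.0)
b = np.float32(1e8)
c = np.float32(-1e8)

left = (a + b) + c   # 1.0 + 1e8 rounds to 1e8, so the 1.0 is lost -> 0.0
right = a + (b + c)  # 1e8 - 1e8 is exactly 0.0, so the 1.0 survives -> 1.0

print(left, right)  # 0.0 1.0
```

A graph compiler is free to apply exactly this kind of reordering, and since autoregressive generation feeds each step's output into the next, a discrepancy on the order of 1e-6 can eventually flip a token.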
### Expected behavior
```shell
The results are expected to be equal.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17222/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17221
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17221/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17221/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17221/events
|
https://github.com/huggingface/transformers/pull/17221
| 1,234,575,519
|
PR_kwDOCUB6oc43wFWb
| 17,221
|
Use word index for determining whether a token is a subword when addi…
|
{
"login": "timothyjlaurent",
"id": 2000204,
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timothyjlaurent",
"html_url": "https://github.com/timothyjlaurent",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17221). All of your documentation changes will be reflected on that endpoint.",
"@NielsRogge do you have suggestions for making a test case for the fast tokenizer that will split the \"그\" token?\r\n\r\nCould I use the pretrained Layoutlmv2 tokenizer and mark the test as slow, or do you have suggestions for how to properly configure a test tokenizer that will split that token?\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,652
| 1,656
| 1,656
|
NONE
| null |
…ng word labels.
# What does this PR do?
Use repeated word index instead of offset[0] == 0 for assigning pad labels to non-first subwords.
As described in #17220, words like '그' will be split into 2 subwords, both with (0,1) offsets but with the same word index. This PR ensures that this second subword receives the correct -100 label.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17220
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
I tried to write a test but the test tokenizer doesn't split my test word. Open to suggestions or maybe it can be a slow test with the pretrained tokenizer.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17221/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17221",
"html_url": "https://github.com/huggingface/transformers/pull/17221",
"diff_url": "https://github.com/huggingface/transformers/pull/17221.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17221.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17220
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17220/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17220/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17220/events
|
https://github.com/huggingface/transformers/issues/17220
| 1,234,542,170
|
I_kwDOCUB6oc5JlZ5a
| 17,220
|
LayoutLMv2 Fast Tokenizer improperly aligns labels for non-first subwords with 0 offsets
|
{
"login": "timothyjlaurent",
"id": 2000204,
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timothyjlaurent",
"html_url": "https://github.com/timothyjlaurent",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"https://github.com/huggingface/transformers/blob/3f936df66287f557c6528912a9a68d7850913b9b/src/transformers/models/layoutlmv2/tokenization_layoutlmv2_fast.py#L599\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,652
| 1,655
| 1,655
|
NONE
| null |
### System Info
```shell
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.16.0
- Platform: macOS-12.3.1-x86_64-i386-64bit
- Python version: 3.8.9
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
In [1]: from transformers import LayoutLMv2TokenizerFast
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
In [2]: tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased")
In [4]: toks = tokenizer(['그'], boxes=[[1,2,3,4]], word_labels=[2])
In [5]: toks
Out[5]: {'input_ids': [101, 1455, 30017, 102], 'token_type_ids': [0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1], 'bbox': [[0, 0, 0, 0], [1, 2, 3, 4], [1, 2, 3, 4], [1000, 1000, 1000, 1000]], 'labels': [-100, 2, 2, -100]}
In [9]: toks.labels
Out[9]: [-100, 2, 2, -100]
```
### Expected behavior
```shell
Since the single word '그' was split into 2 tokens, the second subword should be assigned a -100 value.
This is happening because of logic here:
https://github.com/huggingface/transformers/blob/3f936df66287f557c6528912a9a68d7850913b9b/src/transformers/models/layoutlmv2/tokenization_layoutlmv2_fast.py#L599
where a token's offset is used to determine whether it is the first subword or not.
The problem with '๊ทธ' is that it is split into 2 subword: 'แ', '##แ
ณ' that share an offset of (0,1) so both given the label of the word.
By using solely the word_index to decide which is a subword this problem can be avoided.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17220/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17219
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17219/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17219/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17219/events
|
https://github.com/huggingface/transformers/pull/17219
| 1,234,502,256
|
PR_kwDOCUB6oc43v1n4
| 17,219
|
Updated checkpoint support for Sagemaker Model Parallel
|
{
"login": "cavdard",
"id": 44590949,
"node_id": "MDQ6VXNlcjQ0NTkwOTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/44590949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cavdard",
"html_url": "https://github.com/cavdard",
"followers_url": "https://api.github.com/users/cavdard/followers",
"following_url": "https://api.github.com/users/cavdard/following{/other_user}",
"gists_url": "https://api.github.com/users/cavdard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cavdard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cavdard/subscriptions",
"organizations_url": "https://api.github.com/users/cavdard/orgs",
"repos_url": "https://api.github.com/users/cavdard/repos",
"events_url": "https://api.github.com/users/cavdard/events{/privacy}",
"received_events_url": "https://api.github.com/users/cavdard/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks a lot for your PR! I've left some refactoring suggestions to avoid duplicating code.\r\n\r\nThank you very much for reviewing. I updated the PR. ",
"Updated based on your suggestions."
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR updates SMP checkpoint support. With these changes SMP optimizer state checkpoints will be saved partially while SMP model weights will be saved in full. Since weights are saved in full, checkpoint behavior will be compatible with `save_pretrained` and `shard_checkpoint`.
- Uses `local_state_dict()` with partial optimizer state saving.
- Uses `smp.save` for optimizer state saving with SMP.
- Uses `smp.load` when loading the optimizer state for SMP.
- Reorders weight loading to happen after the model is wrapped for SMP.
- Updates the checks for the existence of optimizer checkpoint files, since SMP partial checkpoints append postfixes to the filename (example: `filename_0_0` or `filename_0_0_0`).
- Adds `load_best_model_at_end` support for SMP.
This PR is created based on the feedback from [previous PR on partial checkpoint support for SMP:](https://github.com/huggingface/transformers/pull/16950)
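The looser existence check for postfixed partial checkpoints can be sketched with a glob. This is an illustrative helper under assumed file names; the actual Trainer logic may differ:

```python
import glob
import os


def optimizer_checkpoint_exists(folder, base_name="optimizer.pt"):
    """Match both a full checkpoint (optimizer.pt) and SMP partial
    checkpoints carrying rank postfixes (optimizer.pt_0_0, optimizer.pt_0_0_0)."""
    pattern = os.path.join(folder, base_name + "*")
    return len(glob.glob(pattern)) > 0
```

A prefix match like this accepts any number of rank postfixes without hard-coding the SMP partition layout.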
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17219/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17219",
"html_url": "https://github.com/huggingface/transformers/pull/17219",
"diff_url": "https://github.com/huggingface/transformers/pull/17219.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17219.patch",
"merged_at": 1652703446000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17218
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17218/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17218/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17218/events
|
https://github.com/huggingface/transformers/pull/17218
| 1,234,490,728
|
PR_kwDOCUB6oc43vzMd
| 17,218
|
Handle copyright in add-new-model-like
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
COLLABORATOR
| null |
# What does this PR do?
This makes sure the copyright is switched to the current year when a user uses `transformers-cli add-new-model-like`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17218/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17218",
"html_url": "https://github.com/huggingface/transformers/pull/17218",
"diff_url": "https://github.com/huggingface/transformers/pull/17218.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17218.patch",
"merged_at": 1652456839000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17217
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17217/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17217/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17217/events
|
https://github.com/huggingface/transformers/pull/17217
| 1,234,383,989
|
PR_kwDOCUB6oc43vcPw
| 17,217
|
Black preview
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
COLLABORATOR
| null |
# What does this PR do?
This PR switches `make style`, `make fixup` and `make quality` to use `black --preview`, which reformats the docstrings (also done by `hf-doc-styler` so nothing new here) as well as all error/logger/warning strings to respect the char limit.
This will avoid the annoying comments from the nefarious sgugger on PRs.
Note: the preview feature differs between minor versions, so the setup needed a small update.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17217/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17217",
"html_url": "https://github.com/huggingface/transformers/pull/17217",
"diff_url": "https://github.com/huggingface/transformers/pull/17217.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17217.patch",
"merged_at": 1652387156000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17216
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17216/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17216/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17216/events
|
https://github.com/huggingface/transformers/pull/17216
| 1,234,255,283
|
PR_kwDOCUB6oc43vBAT
| 17,216
|
Fixed incorrect error message on missing weight file.
|
{
"login": "123jimin",
"id": 835369,
"node_id": "MDQ6VXNlcjgzNTM2OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/835369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/123jimin",
"html_url": "https://github.com/123jimin",
"followers_url": "https://api.github.com/users/123jimin/followers",
"following_url": "https://api.github.com/users/123jimin/following{/other_user}",
"gists_url": "https://api.github.com/users/123jimin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/123jimin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/123jimin/subscriptions",
"organizations_url": "https://api.github.com/users/123jimin/orgs",
"repos_url": "https://api.github.com/users/123jimin/repos",
"events_url": "https://api.github.com/users/123jimin/events{/privacy}",
"received_events_url": "https://api.github.com/users/123jimin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
I just started using Hugging Face Transformers for the first time, and encountered this error.
OSError: Error no file named pytorch_model.bin found in directory (...) but there is a file for Flax weights. Use `from_flax=True` to load this model from those weights.
Indeed, I forgot to download `pytorch_model.bin`, but the model I tried to use was not using Flax, so I dug a little to see which file the library was looking for.
It seems there was a simple mistake...
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17216/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17216",
"html_url": "https://github.com/huggingface/transformers/pull/17216",
"diff_url": "https://github.com/huggingface/transformers/pull/17216.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17216.patch",
"merged_at": 1654079060000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17215
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17215/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17215/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17215/events
|
https://github.com/huggingface/transformers/issues/17215
| 1,234,040,283
|
I_kwDOCUB6oc5JjfXb
| 17,215
|
-1e9 constants in T5 implementation
|
{
"login": "marhlder",
"id": 2690031,
"node_id": "MDQ6VXNlcjI2OTAwMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2690031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marhlder",
"html_url": "https://github.com/marhlder",
"followers_url": "https://api.github.com/users/marhlder/followers",
"following_url": "https://api.github.com/users/marhlder/following{/other_user}",
"gists_url": "https://api.github.com/users/marhlder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marhlder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marhlder/subscriptions",
"organizations_url": "https://api.github.com/users/marhlder/orgs",
"repos_url": "https://api.github.com/users/marhlder/repos",
"events_url": "https://api.github.com/users/marhlder/events{/privacy}",
"received_events_url": "https://api.github.com/users/marhlder/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Probably of interest to @ydshieh "
] | 1,652
| 1,656
| 1,656
|
NONE
| null |
These -1e9 constants are too large for fp16 training
https://github.com/huggingface/transformers/blob/df735d1317994e366ab0edff6c55930e18912b7c/src/transformers/models/t5/modeling_tf_t5.py#L728
https://github.com/huggingface/transformers/blob/df735d1317994e366ab0edff6c55930e18912b7c/src/transformers/models/t5/modeling_tf_t5.py#L746
Maybe they should be made configurable?
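The fp16 concern is easy to verify: half precision tops out around ±65504, so -1e9 cannot be represented. A quick stdlib check (illustrative only; a fix in modeling code would more likely derive the mask value from the tensor's dtype rather than probe with `struct`):

```python
import struct


def fits_in_fp16(value):
    """Return True if value is representable as a finite IEEE 754 binary16."""
    try:
        struct.pack("e", value)  # "e" = half-precision float
        return True
    except (OverflowError, struct.error):
        return False


print(fits_in_fp16(-65504))  # True: the most negative finite fp16 value
print(fits_in_fp16(-1e9))    # False: overflows half precision
```

This is why frameworks often compute additive attention-mask constants as the dtype's minimum instead of a hard-coded -1e9.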
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17215/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17214
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17214/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17214/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17214/events
|
https://github.com/huggingface/transformers/issues/17214
| 1,234,014,812
|
I_kwDOCUB6oc5JjZJc
| 17,214
|
Bug of the text-classification in examples
|
{
"login": "EternalEep",
"id": 20923120,
"node_id": "MDQ6VXNlcjIwOTIzMTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/20923120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EternalEep",
"html_url": "https://github.com/EternalEep",
"followers_url": "https://api.github.com/users/EternalEep/followers",
"following_url": "https://api.github.com/users/EternalEep/following{/other_user}",
"gists_url": "https://api.github.com/users/EternalEep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EternalEep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EternalEep/subscriptions",
"organizations_url": "https://api.github.com/users/EternalEep/orgs",
"repos_url": "https://api.github.com/users/EternalEep/repos",
"events_url": "https://api.github.com/users/EternalEep/events{/privacy}",
"received_events_url": "https://api.github.com/users/EternalEep/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[] | 1,652
| 1,652
| 1,652
|
NONE
| null |
### System Info
```shell
When I fine-tuned a text classification model based on the GLUE no-trainer script, I found a bug in the script.
The URL is below:
https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py#L525
When we use the accelerator for multi-GPU training, the code should change from
if step == len(eval_dataloader)
to
if step == len(eval_dataloader) - 1
Otherwise, it cannot filter out the duplicated samples in the last step.
```
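The off-by-one can be illustrated with a plain-Python sketch of the gather-and-truncate loop (a hypothetical stand-in, not the script itself): the truncation branch must fire on the last batch, whose index is `len(...) - 1`, or the padding duplicates survive.

```python
def collect_predictions(gathered_batches, num_eval_samples):
    """Concatenate per-step gathered predictions, dropping the padding
    duplicates that distributed evaluation appends to the final batch."""
    predictions = []
    for step, batch in enumerate(gathered_batches):
        if step == len(gathered_batches) - 1:  # last step: truncate duplicates
            batch = batch[: num_eval_samples - len(predictions)]
        predictions.extend(batch)
    return predictions


# 10 samples gathered across 2 processes with batch size 4: the last
# gathered batch carries 2 duplicated samples that must be dropped.
batches = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 8, 9]]
print(collect_predictions(batches, 10))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

With `step == len(gathered_batches)` the condition never triggers, since `enumerate` stops at `len - 1`, which is exactly the bug reported.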
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Just run a text classification script with the multi-GPU accelerator. The problem occurs in the last step, which yields duplicated samples.
### Expected behavior
```shell
I think it should be fixed soon.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17214/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17213
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17213/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17213/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17213/events
|
https://github.com/huggingface/transformers/pull/17213
| 1,233,993,311
|
PR_kwDOCUB6oc43uI5c
| 17,213
|
Add support for Perceiver ONNX export
|
{
"login": "deutschmn",
"id": 37573274,
"node_id": "MDQ6VXNlcjM3NTczMjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/37573274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deutschmn",
"html_url": "https://github.com/deutschmn",
"followers_url": "https://api.github.com/users/deutschmn/followers",
"following_url": "https://api.github.com/users/deutschmn/following{/other_user}",
"gists_url": "https://api.github.com/users/deutschmn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deutschmn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deutschmn/subscriptions",
"organizations_url": "https://api.github.com/users/deutschmn/orgs",
"repos_url": "https://api.github.com/users/deutschmn/repos",
"events_url": "https://api.github.com/users/deutschmn/events{/privacy}",
"received_events_url": "https://api.github.com/users/deutschmn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, @deutschmn this looks like a good start! :tada: \r\n\r\nMaybe the `preprocessor` check needs to be updated to fit other models' requirements."
] | 1,652
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
As part of #16308, this PR adds support for exporting `Perceiver` to ONNX ๐
It introduces support for the following features:
- `masked-lm`, e.g. with `python -m transformers.onnx --feature=masked-lm --model=deepmind/language-perceiver export`
- `sequence-classification`, e.g. with `python -m transformers.onnx --feature=sequence-classification --model=deepmind/language-perceiver export`
- `image-classification`, e.g. with `python -m transformers.onnx --feature=image-classification --model=deepmind/vision-perceiver-conv export`
To achieve this, I made the following changes:
- Added `PerceiverOnnxConfig`.
- Changed parts of the modelling. The operations `.T`, `torch.broadcast_to` and `torch.moveaxis` can't currently be exported to ONNX by PyTorch, so I built some workarounds.
- Changed the modality check in `onnx.__main__.py`, since the model type `perceiver` can have either a tokenizer or a feature extractor, depending on the concrete model. (There might be a better way to achieve this than my try-except construction.)
- Added Perceiver to ONNX `FeaturesManager`.
- Added Perceiver to `test_onnx_v2.py`.
## Limitations
The `AutoModel` for Perceiver doesn't work without any preprocessors:
```python
model = AutoModel.from_pretrained("deepmind/language-perceiver")
tokenizer = AutoTokenizer.from_pretrained("deepmind/language-perceiver")
tokd = tokenizer("Rhubarb", return_tensors="pt")
tokd["inputs"] = tokd.pop("input_ids")
model(**tokd)
```
gives this error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/Users/patrick/Projects/open-source/transformers/notebooks/perceiver-onnx.ipynb Cell 10' in <cell line: 6>()
4 tokd = tokenizer("Rhubarb", return_tensors="pt")
5 tokd["inputs"] = tokd.pop("input_ids")
----> 6 model(**tokd)
File ~/.pyenv-x86/versions/transformers-x86/lib/python3.9/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File ~/Projects/open-source/transformers/src/transformers/models/perceiver/modeling_perceiver.py:866, in PerceiverModel.forward(self, inputs, attention_mask, subsampled_output_points, head_mask, output_attentions, output_hidden_states, return_dict)
864 inputs_without_pos = None
865 if inputs.size()[-1] != self.config.d_model:
--> 866 raise ValueError(
867 f"Last dimension of the inputs: {inputs.size()[-1]} doesn't correspond to config.d_model: {self.config.d_model}. "
868 "Make sure to set config.d_model appropriately."
869 )
871 batch_size, seq_length, _ = inputs.size()
872 device = inputs.device
ValueError: Last dimension of the inputs: 9 doesn't correspond to config.d_model: 768. Make sure to set config.d_model appropriately.
```
An embedding is needed, which is implemented in [`PerceiverTextPreprocessor`](https://huggingface.co/docs/transformers/main/en/model_doc/perceiver#transformers.models.perceiver.modeling_perceiver.PerceiverTextPreprocessor), but not included in the default `PerceiverModel`. Therefore, the ONNX export in this PR doesn't include the `default` feature either.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case: #16308
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? โ Added model to `test_onnx_v2.py`.
## Who can review?
@lewtun @ChainYo Could you have a look? 🤗
@NielsRogge I made some minor changes to your code in `modeling_perceiver.py`. Maybe you'd like to have a look too.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17213/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17213/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17213",
"html_url": "https://github.com/huggingface/transformers/pull/17213",
"diff_url": "https://github.com/huggingface/transformers/pull/17213.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17213.patch",
"merged_at": 1654256422000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17212
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17212/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17212/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17212/events
|
https://github.com/huggingface/transformers/pull/17212
| 1,233,963,311
|
PR_kwDOCUB6oc43uCUK
| 17,212
|
update BART docs
|
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
MEMBER
| null |
# What does this PR do?
Update BART `decoder_attention_mask` docstring.
Fixes #17191
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17212/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17212",
"html_url": "https://github.com/huggingface/transformers/pull/17212",
"diff_url": "https://github.com/huggingface/transformers/pull/17212.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17212.patch",
"merged_at": 1652379916000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17211
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17211/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17211/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17211/events
|
https://github.com/huggingface/transformers/issues/17211
| 1,233,915,338
|
I_kwDOCUB6oc5JjA3K
| 17,211
|
CUDA out of memory in Seq2SeqTrainer class
|
{
"login": "kritika121",
"id": 38664807,
"node_id": "MDQ6VXNlcjM4NjY0ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/38664807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kritika121",
"html_url": "https://github.com/kritika121",
"followers_url": "https://api.github.com/users/kritika121/followers",
"following_url": "https://api.github.com/users/kritika121/following{/other_user}",
"gists_url": "https://api.github.com/users/kritika121/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kritika121/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kritika121/subscriptions",
"organizations_url": "https://api.github.com/users/kritika121/orgs",
"repos_url": "https://api.github.com/users/kritika121/repos",
"events_url": "https://api.github.com/users/kritika121/events{/privacy}",
"received_events_url": "https://api.github.com/users/kritika121/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @kritika121,\r\n\r\nIt seems like you don't have enough GPU memory to run your training. Do you maybe have access to a bigger GPU? Otherwise you can try reducing the batch_size, enabling [gradient_checkpointing](https://huggingface.co/docs/transformers/main/en/performance#gradient-checkpointing) or training in [fp16](https://huggingface.co/docs/transformers/main/en/performance#gradient-checkpointing) to save memory.",
"Thanks @patrickvonplaten I have batch_size of 2 and fp16 is set too. I tried enabling gradient_checkpointing and it worked for me. Thanks for your help!! \r\n\r\nClosing the issue."
] | 1,652
| 1,652
| 1,652
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.11.0
- Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.13
- PyTorch version (GPU?): 1.10.2+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@sgugger @patrickvonplaten
Hello! I am trying to finetune the "sshleifer/distill-pegasus-xsum-16-4" model for a seq2seq generation task (specifically summarization) on my own custom dataset (~1800 training data points) using the Hugging Face Transformers Seq2SeqTrainer, but encountered a CUDA OOM error.
I am trying to follow the [finetune-summarization notebook](https://github.com/huggingface/notebooks/blob/main/examples/summarization.ipynb) mentioned by @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Libraries
```python
import transformers
from datasets import load_dataset, load_metric
from transformers import AutoTokenizer
from transformers import AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer
import nltk
import numpy as np
```
Data
```python
data_files = {
"train": "data/train.jsonl",
"validation": "data/val.jsonl"
}
raw_datasets = load_dataset('json', data_files=data_files)
```
Load tokenizer and model
```python
model_checkpoint = 'sshleifer/distill-pegasus-xsum-16-4'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
```
Process Data
```python
if model_checkpoint in ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b"]:
prefix = "summarize: "
else:
prefix = ""
max_input_length = 1024
max_target_length = 128
def preprocess_function(examples):
inputs = [prefix + doc for doc in examples["document"]]
model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)
# Setup the tokenizer for targets
with tokenizer.as_target_tokenizer():
labels = tokenizer(examples["summary"], max_length=max_target_length, truncation=True)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
tokenized_datasets = raw_datasets.map(preprocess_function, batched=True)
```
Trainer
```python
metric = load_metric("rouge")
batch_size = 2
model_name = model_checkpoint.split("/")[-1]
args = Seq2SeqTrainingArguments(
f"{model_name}-finetuned-xsum",
evaluation_strategy = "epoch",
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
weight_decay=0.01,
save_total_limit=2,
num_train_epochs=1,
predict_with_generate=True,
fp16=True,
)
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
def compute_metrics(eval_pred):
predictions, labels = eval_pred
decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Rouge expects a newline after each sentence
decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]
decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]
result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
# Extract a few results
result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
# Add mean generated length
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
result["gen_len"] = np.mean(prediction_lens)
return {k: round(v, 4) for k, v in result.items()}
trainer = Seq2SeqTrainer(
model,
args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics
)
trainer.train()
```
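As a side note on the `compute_metrics` function above, the `-100` replacement can be checked in isolation. This is a toy sketch: `pad_token_id = 0` and the label values are assumptions for illustration only.

```python
import numpy as np

# Toy stand-in for tokenizer.pad_token_id and a padded label batch.
pad_token_id = 0
labels = np.array([[12, 48, 7, -100, -100],
                   [31, 5, -100, -100, -100]])

# Same trick as in compute_metrics: -100 (ignored by the loss) can't be decoded,
# so swap it back to the pad token before calling batch_decode.
decodable = np.where(labels != -100, labels, pad_token_id)

assert (decodable >= 0).all()
assert decodable[0, 3] == pad_token_id
```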
Error
```bash
The following columns in the training set don't have a corresponding argument in `PegasusForConditionalGeneration.forward` and have been ignored: summary, document.
***** Running training *****
Num examples = 1599
Num Epochs = 1
Instantaneous batch size per device = 2
Total train batch size (w. parallel, distributed & accumulation) = 2
Gradient Accumulation steps = 1
Total optimization steps = 800
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-33-3435b262f1ae> in <module>
----> 1 trainer.train()
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1310 tr_loss_step = self.training_step(model, inputs)
1311 else:
-> 1312 tr_loss_step = self.training_step(model, inputs)
1313
1314 if args.logging_nan_inf_filter and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step)):
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/trainer.py in training_step(self, model, inputs)
1837 if self.use_amp:
1838 with autocast():
-> 1839 loss = self.compute_loss(model, inputs)
1840 else:
1841 loss = self.compute_loss(model, inputs)
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
1871 else:
1872 labels = None
-> 1873 outputs = model(**inputs)
1874 # Save past state if it exists
1875 # TODO: this needs to be fixed and made cleaner later.
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1391 output_attentions=output_attentions,
1392 output_hidden_states=output_hidden_states,
-> 1393 return_dict=return_dict,
1394 )
1395 lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
1226 output_attentions=output_attentions,
1227 output_hidden_states=output_hidden_states,
-> 1228 return_dict=return_dict,
1229 )
1230 # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
796 attention_mask,
797 layer_head_mask=(head_mask[idx] if head_mask is not None else None),
--> 798 output_attentions=output_attentions,
799 )
800
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, hidden_states, attention_mask, layer_head_mask, output_attentions)
320 attention_mask=attention_mask,
321 layer_head_mask=layer_head_mask,
--> 322 output_attentions=output_attentions,
323 )
324 hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions)
207 # self_attention
208 key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
--> 209 value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
210
211 if self.is_decoder:
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/linear.py in forward(self, input)
101
102 def forward(self, input: Tensor) -> Tensor:
--> 103 return F.linear(input, self.weight, self.bias)
104
105 def extra_repr(self) -> str:
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1846 if has_torch_function_variadic(input, weight, bias):
1847 return handle_torch_function(linear, (input, weight, bias), input, weight, bias=bias)
-> 1848 return torch._C._nn.linear(input, weight, bias)
1849
1850
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.76 GiB total capacity; 13.65 GiB already allocated; 11.75 MiB free; 13.65 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
The error can be reproduced by loading any open-source summarization dataset:
```python
raw_datasets = load_dataset("xsum")
```
### Expected behavior
```shell
Finetune the summarization model.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17211/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17210
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17210/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17210/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17210/events
|
https://github.com/huggingface/transformers/pull/17210
| 1,233,914,424
|
PR_kwDOCUB6oc43t38T
| 17,210
|
Add test to ensure models can take int64 inputs
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I added small patches to all the models that were failing this test. In all cases, the patch shouldn't break `int32` inputs, since I just cast the dtype of the other operand (which was previously invisibly hardcoded as `tf.int32`) to the dtype of the input tensor."
] | 1,652
| 1,652
| 1,652
|
MEMBER
| null |
Like it says on the tin, this is a test to ensure that our models can take `tf.int64` inputs. I expect this will cause some models to break, in which case this PR will also include patches to ensure that they can take `tf.int64`.
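The dtype patch described in the review comments (casting a previously hardcoded `tf.int32` operand to the input tensor's dtype) can be sketched with numpy as a stand-in for TF; `shift_right` is a hypothetical helper, not the actual patched code:

```python
import numpy as np

def shift_right(input_ids, start_token=np.int32(2)):
    # The fix pattern: cast the previously hardcoded-int32 operand
    # to the input's own dtype so int64 inputs don't break.
    start = np.asarray(start_token, dtype=input_ids.dtype)
    first_col = np.full((input_ids.shape[0], 1), start, dtype=input_ids.dtype)
    return np.concatenate([first_col, input_ids[:, :-1]], axis=1)

ids64 = np.array([[4, 5, 6]], dtype=np.int64)
out = shift_right(ids64)
assert out.dtype == np.int64
assert out.tolist() == [[2, 4, 5]]
```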
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17210/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17210",
"html_url": "https://github.com/huggingface/transformers/pull/17210",
"diff_url": "https://github.com/huggingface/transformers/pull/17210.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17210.patch",
"merged_at": 1652368165000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17209
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17209/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17209/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17209/events
|
https://github.com/huggingface/transformers/issues/17209
| 1,233,869,721
|
I_kwDOCUB6oc5Ji1uZ
| 17,209
|
Thanks
|
{
"login": "adsic2u",
"id": 98673537,
"node_id": "U_kgDOBeGjgQ",
"avatar_url": "https://avatars.githubusercontent.com/u/98673537?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adsic2u",
"html_url": "https://github.com/adsic2u",
"followers_url": "https://api.github.com/users/adsic2u/followers",
"following_url": "https://api.github.com/users/adsic2u/following{/other_user}",
"gists_url": "https://api.github.com/users/adsic2u/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adsic2u/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adsic2u/subscriptions",
"organizations_url": "https://api.github.com/users/adsic2u/orgs",
"repos_url": "https://api.github.com/users/adsic2u/repos",
"events_url": "https://api.github.com/users/adsic2u/events{/privacy}",
"received_events_url": "https://api.github.com/users/adsic2u/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You're welcome! :hugs: ",
"Thanks again, not quite sure what we're doing ",
"> You're welcome! :hugs: \n\nOnce we were mere men!, now, we are much less!?"
] | 1,652
| 1,657
| 1,652
|
NONE
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17209/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17208
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17208/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17208/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17208/events
|
https://github.com/huggingface/transformers/issues/17208
| 1,233,824,046
|
I_kwDOCUB6oc5Jiqku
| 17,208
|
Add Visual Question Answering (VQA) pipeline
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Tagging @mishig25 for the widget ",
"Also LXMERT should handle this task, but likely has a very different API.",
"This sounds amazing. Happy to contribute in anyway I can",
"I'd love to pick this up!",
"Hey @sijunhe, I'm just starting out in open-source, but I'd like to help out however I can! ",
"@sabarish-srinivasan appreciate the help but I saw this a little late and I am almost done with the PR.\r\n",
"@sijunhe No problem, thanks for letting me know! ",
"@LysandreJik I looked at both ViLT and LXMERT and I don't think it's possible to combine these two into a single pipeline for the following reasons:\r\n\r\n1. ViLT formats VQA as a classification task and LXMERT formats VQA as a squad-like QA task. It'd be hard to write a common post-processing\r\n2. ViLT is self-contained within transformers but LXMERT expects some faster-RCNN model to generate the visual features that goes into the model.",
"Yes, don't think we should support LXMERT for the pipeline, since it isn't entirely included in the Transformers library.",
"Sounds good, let's go with ViLT then!",
"Now that #17286 is merged, this issue should be closed now?",
"Yes :) Thank you for your contribution @sijunhe!"
] | 1,652
| 1,657
| 1,657
|
CONTRIBUTOR
| null |
### Feature request
We currently have [ViLT](https://huggingface.co/docs/transformers/model_doc/vilt) in the library, which, among other tasks, is capable of performing visual question answering (VQA).
It would be great to have a pipeline for this task, with the following API:
```
from transformers import pipeline
pipe = pipeline("vqa")
pipe("cats.png", "how many cats are there?")
```
This pipeline could default to the https://huggingface.co/dandelin/vilt-b32-finetuned-vqa checkpoint. Also check out the [Space](https://huggingface.co/spaces/nielsr/vilt-vqa) that showcases the model.
This can be implemented similar to [other pipelines](https://github.com/huggingface/transformers/tree/main/src/transformers/pipelines). For an example PR that added a pipeline, see #11598.
### Motivation
A pipeline is required in order to have inference widgets + a task defined at hf.co/tasks.
Moreover, it would be great to do VQA in two lines of code.
### Your contribution
I can definitely assist in this, together with @Narsil, who's the pipeline expert.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17208/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17207
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17207/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17207/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17207/events
|
https://github.com/huggingface/transformers/issues/17207
| 1,233,758,882
|
I_kwDOCUB6oc5Jiaqi
| 17,207
|
Add UL2: Unifying Language Learning Paradigms
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"cc @stefan-it @peregilk @agemagician @stancld @edugp FYI might be interesting for you as well :-) ",
"Is anyone working on porting UL2 to `transformers` already? If not, I am interested in porting it.",
"Hey @manuelciosici - think @kamalkraj is working on it . Maybe you guys can sync on how to collaborate? :-) \r\n\r\nHappy to help in any way! ",
"Hi @manuelciosici,\r\n\r\nI am trying to understand the [t5x](https://github.com/google-research/t5x) library and loading the model.\r\n\r\nWe can work together. You can ping me on slack/discord\r\n",
"@manuelciosici and @kamalkraj. I am about to start some UL2 training in t5x. I might also contribute here. ",
"Hello @kamalkraj, regarding the `t5x` library (loading model, etc.), I've done some inference with `LongT5` model in my repo [here](https://github.com/stancld/longt5-eval).",
"Thank you so much @stancld \r\n",
"> @manuelciosici and @kamalkraj. I am about to start some UL2 training in t5x. I might also contribute here.\r\n\r\nHi @manuelciosici ,\r\nDid you start fine-tuning? Did you identify the t5 gin file required for it.\r\n\r\nThey have only released `ul2` gin file. Not the full set. \r\nhttps://github.com/google-research/google-research/issues/1101\r\n",
"@kamalkraj Unfortunately, I was handed a tight deadline, so I won't be able to look into UL2 until July.",
"no worries! Anybody interested in taking over the UL2 implementation ? Would be happy to help :-)",
"I can take a stab at this in the next week if no one else is actively working on it!\r\n\r\nI'll hopefully open a PR soon - help is welcome from anyone who would like as well :)",
"I've had the model running locally for a while but didn't get around to pushing it to the hub until now.
\r\n\r\nWith #17420 merged into master, the architecture is already supported (in 4.20).\r\n\r\nI've put the weights here for now:\r\nhttps://huggingface.co/Seledorn/ul2\r\n\r\nI think what remains is mostly verifying that we get identical output with the port and the original model. But this is, as @kamalkraj noted, a bit difficult without the complete gin files. Though the model does give me reasonable outputs, so I believe the conversion is at least mostly correct.\r\n",
"That's amazing @DanielHesslow - I'll check them out this week!",
"Great job on porting the model!",
"Google released weights for 3 UL2 checkpoints. I'm assuming the model in HuggingFace corresponds to the last checkpoint, but just to make sure, that is correct right?",
"Yes that's true as I know! cc @DanielHesslow just to be sure as he has ported the checkpoint :-)",
"Yeah it's the latest one",
"Hello :hand: are you aware of any implementation of the Mixture-of-Denoisers loss? preferably with HF compatibility. Thanks in any case!",
"We haven't added this one yet - would you like to open a feature request / PR for it maybe? :-) "
] | 1,652
| 1,674
| 1,655
|
CONTRIBUTOR
| null |
### Model description
UL2 is a unified framework for pretraining models that are universally effective across datasets and setups. UL2 uses Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Code and weights (20 billion parameter models): https://github.com/google-research/google-research/tree/master/ul2
The code is based on T5x (which is JAX/FLAX): https://github.com/google-research/t5x
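As a rough, framework-free sketch of the mode-switching idea described above: downstream inputs are associated with a pre-training denoiser by prepending a paradigm token. The `[NLU]`/`[NLG]`/`[S2S]` tokens are the ones described in the UL2 paper; the helper and dictionary below are hypothetical, not part of any released API.

```python
# Illustrative only: UL2 "mode switching" associates a downstream task with a
# pre-training denoiser by prepending a paradigm token to the input text.
# The helper and MODE_TOKENS mapping are hypothetical conveniences.
MODE_TOKENS = {
    "regular": "[NLU]",      # R-denoiser: standard span corruption
    "extreme": "[NLG]",      # X-denoiser: extreme span corruption
    "sequential": "[S2S]",   # S-denoiser: prefix language modeling
}

def add_mode_prefix(text: str, mode: str) -> str:
    """Prepend the paradigm token for the chosen denoising mode."""
    return f"{MODE_TOKENS[mode]} {text}"

print(add_mode_prefix("Translate to German: Hello", "sequential"))
# prints: [S2S] Translate to German: Hello
```

The prefixed string would then be tokenized and fed to the seq2seq model as usual.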
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17207/reactions",
"total_count": 5,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/17207/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17206
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17206/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17206/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17206/events
|
https://github.com/huggingface/transformers/pull/17206
| 1,233,693,982
|
PR_kwDOCUB6oc43tIng
| 17,206
|
Traced models serialization and torchscripting fix
|
{
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"There seems to remain a few failures in `torch.fx` tests; the PR can be merged after those are solved!",
"@michaelbenayoun I'd like to propose some additional fixes that we discovered were needed to properly trace `T5ForConditionalGeneration`:\r\n\r\nhttps://github.com/jamesr66a/transformers/commit/1a75148346cb471267b59fe7b473f304f6a02691\r\n\r\nCan these be added?",
"Yes, for some reason the tests do not pass for torch 1.11 (I tested locally on torch 1.10).\r\nI will add those changes too.",
"@michaelbenayoun Can I propose one final change to switch the graph surgery workaround to only trigger on older PyTorch versions where it's relevant?\r\n\r\nhttps://github.com/jamesr66a/transformers/commit/5ac7bb737d9c9806051311a16f1799102d456fb8\r\n\r\nOtherwise, when we're working on PyTorch nightly, this^ code breaks because it's trying to remove nodes that still have uses",
"@jamesr66a I added the gating, but only from version 1.12 as it was failing otherwise.",
"@michaelbenayoun Unfortunately, bumping the version check up to `1.12` breaks us. Actually, that was indirectly working around a semantic issue with deleting the concrete arg node. Do you mind augmenting the patch with this:\r\n\r\nhttps://github.com/pbelevich/transformers/commit/e3fce52d1e14b1f75941fcaca7cd3029a6128016",
"@sgugger Replaced the creation of a new flag by setting a special value for the `fx_compatible` flag for models that can be traced but not torchscipted (-1). \r\nThis flag should take a boolean value 99% of the time anyways.",
"@michaelbenayoun This doesn't really work either. I'm not trying to be gratuitously painful here, but the common model tester is at the core of our test suite for the new model addition PRs. Those PRs are huge, and it only thanks to a robust CI that we can make sure the models added actually work with the whole API Transformers offers.\r\n\r\nAdding a new flag, or a magic value for an existing flag, just because there is one model that needs different testing is not something we usually do or allow. In both cases, either the contributor or the reviewer will have no idea what your new flag/magic value does, especially since there is no documentation of it anywhere.\r\n\r\nAs I said before, in those instances where we need to adapt a common test to a specific model, we override it in the tester of said model. cc @LysandreJik and @patrickvonplaten "
] | 1,652
| 1,653
| 1,653
|
MEMBER
| null |
# What does this PR do?
- Fixes the issue that was preventing traced models from being TorchScripted
- Fixes the issue that was preventing traced models from being serialized
- Fixes get_attr issues
Fixes #15974
@jamesr66a Can you try on your end and validate that it solves your issues?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17206/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17206/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17206",
"html_url": "https://github.com/huggingface/transformers/pull/17206",
"diff_url": "https://github.com/huggingface/transformers/pull/17206.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17206.patch",
"merged_at": 1653321041000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17205
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17205/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17205/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17205/events
|
https://github.com/huggingface/transformers/pull/17205
| 1,233,690,206
|
PR_kwDOCUB6oc43tHzv
| 17,205
|
[WIP] add MobileViT model
|
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I messed up. Made a new PR instead: https://github.com/huggingface/transformers/pull/17354"
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
Add the MobileViT model to Transformers. This is a computer vision model that combines CNNs with transformers: https://machinelearning.apple.com/research/vision-transformer
The model comes in three sizes: small, extra small, and xx-small. There are two heads: image classification and semantic segmentation. Object detection will be added later.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. (Internal discussion on Slack.)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17205/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17205",
"html_url": "https://github.com/huggingface/transformers/pull/17205",
"diff_url": "https://github.com/huggingface/transformers/pull/17205.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17205.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17204
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17204/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17204/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17204/events
|
https://github.com/huggingface/transformers/pull/17204
| 1,233,643,334
|
PR_kwDOCUB6oc43s_pA
| 17,204
|
[Kernel Fusion] Training benchmarks of Torchdynamo + AOTAutograd + NVFuser (many models)
|
{
"login": "Chillee",
"id": 6355099,
"node_id": "MDQ6VXNlcjYzNTUwOTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6355099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Chillee",
"html_url": "https://github.com/Chillee",
"followers_url": "https://api.github.com/users/Chillee/followers",
"following_url": "https://api.github.com/users/Chillee/following{/other_user}",
"gists_url": "https://api.github.com/users/Chillee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Chillee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Chillee/subscriptions",
"organizations_url": "https://api.github.com/users/Chillee/orgs",
"repos_url": "https://api.github.com/users/Chillee/repos",
"events_url": "https://api.github.com/users/Chillee/events{/privacy}",
"received_events_url": "https://api.github.com/users/Chillee/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2604155188,
"node_id": "MDU6TGFiZWwyNjA0MTU1MTg4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Benchmarks",
"name": "Benchmarks",
"color": "2DF372",
"default": false,
"description": "Issues related to Memory regressions in tests and scripts"
},
{
"id": 2690307185,
"node_id": "MDU6TGFiZWwyNjkwMzA3MTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Performance",
"name": "Performance",
"color": "207F32",
"default": false,
"description": ""
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Installation Instructions:\r\n```\r\n# install torch-nightly\r\nconda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch-nightly\r\n\r\n# install functorch (and reinstall after `git pull` later if need to sync up)\r\ngit clone https://github.com/pytorch/functorch\r\ncd functorch\r\nrm -rf build\r\npip install -e .[aot]\r\n\r\ncd ..\r\ngit clone https://github.com/pytorch/torchdynamo\r\ncd torchdynamo\r\npip install -r requirements.txt\r\npython setup.py develop\r\n```",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17204). All of your documentation changes will be reflected on that endpoint.",
"I was able to reproduce the speed ups/memory compression. great work, @Chillee and @anijain2305!\r\n\r\nSo is this planned to be officially released in pt-1.12? \r\n\r\nWhen will the API be stable that is and then we can start integrating / writing examples for the users?\r\n\r\n",
"> So is this planned to be officially released in pt-1.12?\r\n\r\nWe plan to have an \"release\" of torchdynamo and functorch that correspond to the official PyTorch 1.12 release, yes. We'll also be building binaries of functorch (and possibly dynamo) for easier install. However, this is different from a \"stable\" release in PyTorch core (where we make BC guarantees, will announce it officially, etc.)\r\n\r\n> When will the API be stable that is and then we can start integrating / writing examples for the users?\r\n\r\nI think the API is likely stable enough (and in general, the API surface of Dynamo is fairly minimal from the user side!). We can likely commit to supporting the usages of Torchdynamo in HF (which I expect to primarily just involve turning on context managers), although cc: @jansel on this point..",
"Thank you for clarifying, Horace.\r\n\r\nWe can of course make experimental feature support and tag it as such as we won't want to maintain multiple APIs if they will change in pt-1.13.\r\n\r\n",
"> we won't want to maintain multiple APIs if they will change in pt-1.13.\r\n\r\nThe APIs (from the torchdynamo side) should be stable.",
"Continuing from our discussion on slack so that others can see and participate:\r\n\r\n> Horace: How this can be integrated into `transformers`:\r\n\r\nThere are 2 ways HF transformers are used:\r\n\r\n1. a user writing their own training loop - and just use the model - we will document how they can enable TorchDynamo there - this is the easiest as there no API to create on our side and BC to support - just keeping the docs and examples up-to-date\r\n\r\n2. a user using HF Trainer or Accelerate - there we would need to add a flag which will turn TorchDynamo on automatically, same as a user chooses which --optim to use - here we have to be careful with designing a backward compatible API - your team and us will need to discuss the various options that the user should be able to set via cmd line - ideally a single flag that can have multiple values - as the there is already a myriad of options so we would want to keep it tight.\r\n\r\n----------------\r\n\r\nLet me ping @sgugger - Sylvain, do you feel we could integrate this into HF Trainer and Accelerate? It's just a few lines of code that make the code run faster and use less memory - with some models having little to no improvements and others with much larger impacts - please see the OP for the benchmark table.\r\n\r\nbefore:\r\n```\r\n out = model(**train_inputs).loss.abs().sum()\r\n```\r\nafter:\r\n```\r\nimport torchdynamo\r\nfrom torchdynamo.optimizations.training import aot_autograd_speedup_strategy\r\n[...]\r\n with torchdynamo.optimize(aot_autograd_speedup_strategy):\r\n out = model(**train_inputs).loss.abs().sum()\r\n```\r\n\r\nHorace is saying that for pt-1.12 it'd be just:\r\n\r\n```\r\nimport torchdynamo\r\n[...]\r\n with torchdynamo.optimize(โnvfuserโ):\r\n out = model(**train_inputs).loss.abs().sum() \r\n```\r\n",
"I don't mind adding an integration to the `Trainer` and/or `Accelerate`, it looks like it's just a matter of adapting the context manager [here](https://github.com/huggingface/transformers/blob/18d6b356c5a0b800907fe19860b4644db95ea46b/src/transformers/trainer.py#L2187) and we do have utils to create lists of context managers.\r\n\r\nIn terms of control, users are starting to be a bit confused with the huge number of training arguments we have, so trying to keep the flags/args of this new feature to a bare minimum would be great!",
"I need to adapt the install instructions to make it easy to build on nightly CI, I think this should do:\r\n\r\n```\r\npip install git+https://github.com/pytorch/functorch#egg=functorch[aot]\r\npip install git+https://github.com/pytorch/torchdynamo\r\n```\r\n\r\n@Chillee, could you please validate that I'm not missing anything? The original was:\r\n\r\n```\r\n\r\n# install functorch (and reinstall after `git pull` later if need to sync up)\r\ngit clone https://github.com/pytorch/functorch\r\ncd functorch\r\nrm -rf build\r\npip install -e .[aot]\r\n\r\ncd ..\r\ngit clone https://github.com/pytorch/torchdynamo\r\ncd torchdynamo\r\npip install -r requirements.txt\r\npython setup.py develop\r\n```\r\n\r\n--------------\r\n\r\nActually, shouldn't `python setup.py develop` be `pip install -e .` for consistency for the latter?\r\n\r\nThank you!\r\n",
"@stas00 @Chillee unfortunately, these commands above do not do the job. Any chance you guys can update the instructions on correctly installing torchdynamo? Getting issues with different versions of the torch ( even if I install nightly).",
"Note that in the nightlies, `dynamo` is included in PyTorch as `import torch._dynamo as dynamo`. The latest instructions I have are:\r\n```py\r\npip install numpy\r\npip install --pre torch[dynamo] --extra-index-url https://download.pytorch.org/whl/nightly/cu117/\r\n```",
"\r\nThanks for the info @sgugger \r\n> Note that in the nightlies, `dynamo` is included in PyTorch as `import torch._dynamo as dynamo`. The latest instructions I have are:\r\n\r\nThere is something wrong going on though. When I try to install nightlies with the command that you provided above (also tried to --force reinstall), it still complains like below. \r\n\r\n`ModuleNotFoundError: No module named 'torch._dynamo'\r\n`\r\n\r\nIt is weird that it worked once few days back and I could run all sanity checks passed with torchdynamo, then after an hour, I tried to rerun the experiment and got the error above. Looks like it is not stable at all, at least for now."
] | 1,652
| 1,668
| null |
NONE
| null |
Note to maintainers: We are using this PR to collaborate and there is no intention yet to merge anything, so please ignore unless you want to experiment with the latest auto-speedups.
## What was the issue with the previous AOTAutograd integration?
So, there was some investigation into applying AOTAutograd a couple months ago in this PR (https://github.com/huggingface/transformers/pull/15264). Although the performance results were quite promising, @stas00 and I found one major blocker - the potential for incorrect semantics. AOTAutograd is a tracing-based approach, and as such, it's fairly difficult for it to guarantee that its semantics are always correct. For example, data-dependent control flow, use of third-party libraries (like Numpy), or modification of global state all posed problems for integrating AOTAutograd into HuggingFace. Considering that HF has >100 models (and is adding more every day!), ensuring that AOTAutograd produces correct results for every one of them would have been quite burdensome.
## TorchDynamo to the rescue
Luckily, now, there's another solution in the form of [Torchdynamo](https://dev-discuss.pytorch.org/t/torchdynamo-an-experiment-in-dynamic-python-bytecode-transformation/361) (from @jansel)! In contrast to tracing based approaches like `jit.trace` and AOTAutograd, Torchdynamo is *sound* - it should never produce incorrect results (modulo bugs). In comparison to approaches like `jit.script`, Torchdynamo is much more *complete* - it should allow any PyTorch code to be able to run, although it may not always speed things up.
The central approach that TorchDynamo takes is that as opposed to trying to live at the AST level (i.e. `jit.script`) or the object-level (i.e. tracing like `jit.trace`), it lives at the Python bytecode level. This is similar to the approach that language JITs like Javascript's V8 or JVM's Hotspot take. By living at this level, it's able to ensure that it can support *all* Python, as it can always fall back to eager-mode execution. Let's take an example of some code that would have been very problematic previously.
```
def f(x):
    a = x * 2
    b = a + torch.from_numpy(np.random.randn(5))
    if b.sum() > 0:
        return b.sin().sin()
    else:
        return b.cos().cos()
```
Not only does this have data-dependent control flow - it also has calls to external libraries that aren't PyTorch! (numpy in this case). TorchDynamo (morally) would rewrite this code into something like this:
```
def block1(x, np_tensor):
    a = x * 2
    b = a + np_tensor
    return b

def block2(b):
    return b.sin().sin()

def block3(b):
    return b.cos().cos()

def f_dynamo(x):
    b = block1(x, torch.from_numpy(np.random.randn(5)))
    if b.sum() > 0:
        return block2(b)
    else:
        return block3(b)
```
Note that `block1`, `block2`, and `block3` are just simple straight line functions - exactly what AOTAutograd can handle! So, we can now apply AOTAutograd to each of those blocks.
In this way, TorchDynamo and AOTAutograd complement each other - TorchDynamo resolves the dynamic/non-traceable behavior that AOTAutograd can't handle, and AOTAutograd then provides static compilation that handles things like PyTorch's autograd.
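The soundness claim can be checked on a torch-free toy: since the rewrite only factors straight-line regions into separate functions without changing control flow, the original and rewritten forms must agree on every input. Everything below is an illustrative scalar stand-in, not the actual Dynamo output.

```python
import math

# Toy analogue of the rewrite: a function with data-dependent control flow...
def f(x, noise):
    b = x * 2 + noise
    if b > 0:
        return math.sin(math.sin(b))
    return math.cos(math.cos(b))

# ...and its block-decomposed form, as a bytecode-level rewriter would see it.
def block1(x, noise):
    return x * 2 + noise

def block2(b):
    return math.sin(math.sin(b))

def block3(b):
    return math.cos(math.cos(b))

def f_rewritten(x, noise):
    b = block1(x, noise)
    return block2(b) if b > 0 else block3(b)

# Both forms agree on every input, including both branches.
for x in (-3.0, -0.5, 0.0, 0.5, 3.0):
    assert f(x, 0.1) == f_rewritten(x, 0.1)
```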
So, what do you need to do to use AOTAutograd with Torchdynamo? Well, it's simple!
```
import torchdynamo
from torchdynamo.optimizations.training import aot_autograd_speedup_strategy
with torchdynamo.optimize(aot_autograd_speedup_strategy):
    # run your model here!
```
All of the above is simply to capture the graphs in the first place. However, after capturing the graphs, we need to actually speed them up. To do so, we pass them to [NVFuser](https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s31952/), a PyTorch-native compiler for GPUs.
## Results
This script primarily comes from a great effort from @anijain2305. However, I want to note a couple of things.
1. In contrast with the pure AOTAutograd integration, where our benchmark only covered 3.5 models (and had some tricky to debug correctness issues), it was fairly trivial to extend this benchmarking to 14 models (with correctness testing for all of them!) In fact, the main bottleneck to adding more is just figuring out how to run more models (I pretty much exhausted all of the AutoConfig ones I could run easily).
2. For the most part, TorchDynamo + AOTAutograd improves both performance and memory usage. On some models, quite significantly (1.4x+ for MobileBert, FNet, and Albert), but it generally improves performance for nearly all models.
3. For many of these models, we *can't* produce a single graph to compile, often due to Numpy usage. Here, it's crucial that torchdynamo passes multiple graphs to AOTAutograd.
4. Currently, we feed the graphs produced by TorchDynamo and AOTAutograd into NVFuser. But, in the future, other backends should have no issues integrating into this as well (and in fact, we *have* some extra integrations, like TensorRT).
Run on A100:
```
$ python hf_dynamo_aot.py --run-dynamo-aot-efficient --nvfuser
```
Results:
| model | dtype | name | time (s) | mem (GB) | speedup | mem compression |
|:---------------------------|:---------------|---------------------|-----------:|-----------:|----------:|------------------:|
| BertForMaskedLM | float32 |eager | 0.040 | 3.521 | 1.000 | 1.000 |
| BertForMaskedLM | float32 |dynamo_aot_efficient | 0.037 | 3.516 | 1.094 | 1.001 |
| BertForMaskedLM | float16 |eager | 0.027 | 1.880 | 1.000 | 1.000 |
| BertForMaskedLM | float16 |dynamo_aot_efficient | 0.023 | 1.885 | 1.155 | 0.997 |
| BertForMaskedLM | bfloat16 |eager | 0.027 | 1.874 | 1.000 | 1.000 |
| BertForMaskedLM | bfloat16 |dynamo_aot_efficient | 0.023 | 1.867 | 1.154 | 1.003 |
| AlbertForMaskedLM | float32 |eager | 0.081 | 6.070 | 1.000 | 1.000 |
| AlbertForMaskedLM | float32 |dynamo_aot_efficient | 0.056 | 3.943 | 1.442 | 1.539 |
| AlbertForMaskedLM | float16 |eager | 0.046 | 2.908 | 1.000 | 1.000 |
| AlbertForMaskedLM | float16 |dynamo_aot_efficient | 0.035 | 1.971 | 1.338 | 1.475 |
| AlbertForMaskedLM | bfloat16 |eager | 0.048 | 2.866 | 1.000 | 1.000 |
| AlbertForMaskedLM | bfloat16 |dynamo_aot_efficient | 0.035 | 1.972 | 1.374 | 1.453 |
| GPT2LMHeadModel | float32 |eager | 0.055 | 4.632 | 1.000 | 1.000 |
| GPT2LMHeadModel | float32 |dynamo_aot_efficient | 0.043 | 3.791 | 1.280 | 1.222 |
| GPT2LMHeadModel | float16 |eager | 0.036 | 2.426 | 1.000 | 1.000 |
| GPT2LMHeadModel | float16 |dynamo_aot_efficient | 0.029 | 2.018 | 1.213 | 1.203 |
| GPT2LMHeadModel | bfloat16 |eager | 0.036 | 2.425 | 1.000 | 1.000 |
| GPT2LMHeadModel | bfloat16 |dynamo_aot_efficient | 0.030 | 1.998 | 1.208 | 1.214 |
| LongformerForMaskedLM | float32 |eager | 0.121 | 4.591 | 1.000 | 1.000 |
| LongformerForMaskedLM | float32 |dynamo_aot_efficient | 0.120 | 4.585 | 1.006 | 1.001 |
| LongformerForMaskedLM | float16 |eager | 0.096 | 2.711 | 1.000 | 1.000 |
| LongformerForMaskedLM | float16 |dynamo_aot_efficient | 0.096 | 2.705 | 1.005 | 1.002 |
| T5ForConditionalGeneration | float32 |eager | 0.103 | 8.300 | 1.000 | 1.000 |
| T5ForConditionalGeneration | float32 |dynamo_aot_efficient | 0.098 | 7.831 | 1.050 | 1.060 |
| DistilBertForMaskedLM | float32 |eager | 0.045 | 3.492 | 1.000 | 1.000 |
| DistilBertForMaskedLM | float32 |dynamo_aot_efficient | 0.043 | 3.497 | 1.038 | 0.999 |
| DistilBertForMaskedLM | float16 |eager | 0.026 | 1.870 | 1.000 | 1.000 |
| DistilBertForMaskedLM | float16 |dynamo_aot_efficient | 0.027 | 1.871 | 0.963 | 0.999 |
| DistilBertForMaskedLM | bfloat16 |eager | 0.026 | 1.860 | 1.000 | 1.000 |
| DistilBertForMaskedLM | bfloat16 |dynamo_aot_efficient | 0.027 | 1.861 | 0.986 | 1.000 |
| RobertaForMaskedLM | float32 |eager | 0.157 | 12.366 | 1.000 | 1.000 |
| RobertaForMaskedLM | float32 |dynamo_aot_efficient | 0.135 | 12.341 | 1.164 | 1.002 |
| RobertaForMaskedLM | float16 |eager | 0.098 | 6.573 | 1.000 | 1.000 |
| RobertaForMaskedLM | float16 |dynamo_aot_efficient | 0.088 | 6.567 | 1.114 | 1.001 |
| RobertaForMaskedLM | bfloat16 |eager | 0.101 | 6.579 | 1.000 | 1.000 |
| RobertaForMaskedLM | bfloat16 |dynamo_aot_efficient | 0.088 | 6.559 | 1.140 | 1.003 |
| GPT2LMHeadModel | float32 |eager | 0.123 | 9.292 | 1.000 | 1.000 |
| GPT2LMHeadModel | float32 |dynamo_aot_efficient | 0.098 | 7.108 | 1.256 | 1.307 |
| GPT2LMHeadModel | float16 |eager | 0.080 | 4.610 | 1.000 | 1.000 |
| GPT2LMHeadModel | float16 |dynamo_aot_efficient | 0.067 | 3.767 | 1.182 | 1.224 |
| GPT2LMHeadModel | bfloat16 |eager | 0.081 | 4.779 | 1.000 | 1.000 |
| GPT2LMHeadModel | bfloat16 |dynamo_aot_efficient | 0.068 | 3.763 | 1.191 | 1.270 |
| ElectraForMaskedLM | float32 |eager | 0.074 | 6.257 | 1.000 | 1.000 |
| ElectraForMaskedLM | float32 |dynamo_aot_efficient | 0.064 | 6.258 | 1.151 | 1.000 |
| ElectraForMaskedLM | float16 |eager | 0.042 | 3.356 | 1.000 | 1.000 |
| ElectraForMaskedLM | float16 |dynamo_aot_efficient | 0.039 | 3.347 | 1.092 | 1.003 |
| ElectraForMaskedLM | bfloat16 |eager | 0.044 | 3.367 | 1.000 | 1.000 |
| ElectraForMaskedLM | bfloat16 |dynamo_aot_efficient | 0.039 | 3.341 | 1.124 | 1.008 |
| FNetForMaskedLM | float32 |eager | 0.055 | 4.974 | 1.000 | 1.000 |
| FNetForMaskedLM | float32 |dynamo_aot_efficient | 0.038 | 2.802 | 1.429 | 1.775 |
| ConvBertForMaskedLM | float32 |eager | 0.090 | 5.809 | 1.000 | 1.000 |
| ConvBertForMaskedLM | float32 |dynamo_aot_efficient | 0.085 | 5.795 | 1.058 | 1.002 |
| ConvBertForMaskedLM | float16 |eager | 0.064 | 3.021 | 1.000 | 1.000 |
| ConvBertForMaskedLM | float16 |dynamo_aot_efficient | 0.062 | 3.009 | 1.024 | 1.004 |
| MobileBertForMaskedLM | float32 |eager | 0.104 | 2.474 | 1.000 | 1.000 |
| MobileBertForMaskedLM | float32 |dynamo_aot_efficient | 0.069 | 2.576 | 1.499 | 0.961 |
| MobileBertForMaskedLM | float16 |eager | 0.101 | 1.329 | 1.000 | 1.000 |
| MobileBertForMaskedLM | float16 |dynamo_aot_efficient | 0.067 | 1.423 | 1.499 | 0.934 |
| MobileBertForMaskedLM | bfloat16 |eager | 0.100 | 1.330 | 1.000 | 1.000 |
| MobileBertForMaskedLM | bfloat16 |dynamo_aot_efficient | 0.067 | 1.423 | 1.504 | 0.935 |
| CamembertForMaskedLM | float32 |eager | 0.075 | 6.312 | 1.000 | 1.000 |
| CamembertForMaskedLM | float32 |dynamo_aot_efficient | 0.065 | 6.317 | 1.151 | 0.999 |
| CamembertForMaskedLM | float16 |eager | 0.047 | 3.376 | 1.000 | 1.000 |
| CamembertForMaskedLM | float16 |dynamo_aot_efficient | 0.044 | 3.366 | 1.084 | 1.003 |
| CamembertForMaskedLM | bfloat16 |eager | 0.049 | 3.390 | 1.000 | 1.000 |
| CamembertForMaskedLM | bfloat16 |dynamo_aot_efficient | 0.044 | 3.370 | 1.113 | 1.006 |
| LayoutLMForMaskedLM | float32 |eager | 0.077 | 6.305 | 1.000 | 1.000 |
| LayoutLMForMaskedLM | float32 |dynamo_aot_efficient | 0.067 | 6.305 | 1.149 | 1.000 |
| LayoutLMForMaskedLM | float16 |eager | 0.045 | 3.371 | 1.000 | 1.000 |
| LayoutLMForMaskedLM | float16 |dynamo_aot_efficient | 0.042 | 3.373 | 1.089 | 0.999 |
| LayoutLMForMaskedLM | bfloat16 |eager | 0.047 | 3.389 | 1.000 | 1.000 |
| LayoutLMForMaskedLM | bfloat16 |dynamo_aot_efficient | 0.042 | 3.371 | 1.118 | 1.005 |
### Limitations
There are a couple of limitations today (that we're working on addressing).
1. Like AOTAutograd, this pipeline currently requires static shape specialization. That is, when the input shapes change, we'll need to recompile.
2. The interaction with PyTorch's distributed features is somewhat untested.
### Reading resources:
AOTAutograd: https://docs.google.com/presentation/d/1rTt0BR2KChDQQTks2hHUtvHxtHQKwgQHVNrmbhj0byk/edit?usp=sharing
TorchDynamo: https://dev-discuss.pytorch.org/t/torchdynamo-an-experiment-in-dynamic-python-bytecode-transformation/361
Min-Cut rematerialization: https://dev-discuss.pytorch.org/t/min-cut-optimal-recomputation-i-e-activation-checkpointing-with-aotautograd/467/7
NVFuser: https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s31952/
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17204/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17204/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17204",
"html_url": "https://github.com/huggingface/transformers/pull/17204",
"diff_url": "https://github.com/huggingface/transformers/pull/17204.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17204.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17203
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17203/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17203/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17203/events
|
https://github.com/huggingface/transformers/pull/17203
| 1,233,603,593
|
PR_kwDOCUB6oc43s3IH
| 17,203
|
fixed bug in run_mlm_flax_stream.py
|
{
"login": "KennethEnevoldsen",
"id": 23721977,
"node_id": "MDQ6VXNlcjIzNzIxOTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KennethEnevoldsen",
"html_url": "https://github.com/KennethEnevoldsen",
"followers_url": "https://api.github.com/users/KennethEnevoldsen/followers",
"following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_user}",
"gists_url": "https://api.github.com/users/KennethEnevoldsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KennethEnevoldsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KennethEnevoldsen/subscriptions",
"organizations_url": "https://api.github.com/users/KennethEnevoldsen/orgs",
"repos_url": "https://api.github.com/users/KennethEnevoldsen/repos",
"events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/KennethEnevoldsen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"No problem. Tried using the `remove_columns ` argument as in the example but the map functions do not take that argument (probably easy to add):\r\n\r\n```python\r\ntokenized_datasets = dataset.map(\r\n tokenize_function,\r\n batched=True,\r\n remove_columns=column_names,\r\n )\r\n```\r\n\r\nSimilar it might be nice for consistency to add the `.column_names` extensions to the IterableDataset.",
"> No problem. Tried using the `remove_columns ` argument as in the example but the map functions do not take that argument (probably easy to add):\r\n> \r\n> ```python\r\n> tokenized_datasets = dataset.map(\r\n> tokenize_function,\r\n> batched=True,\r\n> remove_columns=column_names,\r\n> )\r\n> ```\r\n> \r\n> Similar it might be nice for consistency to add the `.column_names` extensions to the IterableDataset.\r\n\r\nPinging @lhoestq here. Is not possible to pass remove column to streaming dataset ?",
"`remove_columns` does exist: \r\n\r\nhttps://huggingface.co/docs/datasets/v2.2.1/en/package_reference/main_classes#datasets.IterableDataset.map.remove_columns",
"Indeed it does. I was using datasets version `1.17.0`. Rerunning with an updated version indeed resolved the issue. ",
"Another note here, seems like the max_seq_length state that each text is truncated to e.g. max_seq_length, but from what I can read in the `advance_iter_and_group_samples` it seems like they are concatenated and split in groups of max_seq_length (with the remainder being discarded)\r\n\r\nI would probably change it from:\r\n```python\r\n max_seq_length: Optional[int] = field(\r\n default=None,\r\n metadata={\r\n \"help\": \"The maximum total input sequence length after tokenization. Sequences longer \"\r\n \"than this will be truncated. Default to the max input length of the model.\"\r\n },\r\n )\r\n```\r\nto:\r\n```python\r\n max_seq_length: Optional[int] = field(\r\n default=None,\r\n metadata={\r\n \"help\": \"The maximum total input sequence length after tokenization. Sequences are concatenated in groups of max_seq_length. Default to the max input length of the model.\"\r\n },\r\n )\r\n``` ",
"> Another note here, seems like the max_seq_length state that each text is truncated to e.g. max_seq_length, but from what I can read in the `advance_iter_and_group_samples` it seems like they are concatenated and split in groups of max_seq_length (with the remainder being discarded)\r\n> \r\n> I would probably change it from:\r\n> \r\n> ```python\r\n> max_seq_length: Optional[int] = field(\r\n> default=None,\r\n> metadata={\r\n> \"help\": \"The maximum total input sequence length after tokenization. Sequences longer \"\r\n> \"than this will be truncated. Default to the max input length of the model.\"\r\n> },\r\n> )\r\n> ```\r\n> \r\n> to:\r\n> \r\n> ```python\r\n> max_seq_length: Optional[int] = field(\r\n> default=None,\r\n> metadata={\r\n> \"help\": \"The maximum total input sequence length after tokenization. Sequences are concatenated in groups of max_seq_length. Default to the max input length of the model.\"\r\n> },\r\n> )\r\n> ```\r\n\r\nGood catch! Feel free to open another PR if you would like to fix this. "
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixed bug caused by additional keys (id, text) referring to non-list values when concatenating samples. An alternative option is to drop these columns in the dataset before passing it to `iter`.
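The failure mode can be illustrated with a minimal, framework-free sketch (hypothetical helper name; the real script works on `datasets` iterators): only list-valued token fields are concatenated and regrouped, so stray string columns such as `id` and `text` no longer break the grouping.

```python
def group_token_fields(samples, max_seq_length):
    """Concatenate tokenized fields and split them into max_seq_length chunks.

    Hypothetical stand-in for the grouping step in run_mlm_flax_stream.py:
    non-list fields (e.g. "id", "text") are dropped before concatenation,
    and the trailing remainder shorter than max_seq_length is discarded.
    """
    # Keep only keys whose values are lists of token ids.
    token_keys = [k for k, v in samples[0].items() if isinstance(v, list)]
    concatenated = {k: sum((s[k] for s in samples), []) for k in token_keys}
    total = (len(concatenated[token_keys[0]]) // max_seq_length) * max_seq_length
    return {
        k: [v[i : i + max_seq_length] for i in range(0, total, max_seq_length)]
        for k, v in concatenated.items()
    }
```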
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17132 by @HLasse.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten (tagged in bug)
@patil-suraj (assigned to bug)
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17203/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17203/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17203",
"html_url": "https://github.com/huggingface/transformers/pull/17203",
"diff_url": "https://github.com/huggingface/transformers/pull/17203.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17203.patch",
"merged_at": 1652701228000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17202
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17202/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17202/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17202/events
|
https://github.com/huggingface/transformers/pull/17202
| 1,233,602,765
|
PR_kwDOCUB6oc43s29H
| 17,202
|
BLOOM
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @stas00 !\r\n\r\nThank you very much for your comment !!\r\n1-\r\nYes it will be changed to BLOOM most probably in the next commit \r\n\r\n2-\r\nI like the idea of making it modulable (enable/disable alibi and embed norm) so that both models would work. But in this case 13b-en could be understood as a smaller version of BLOOM which is not the case. What do you think?\r\n\r\n3-\r\nI managed to convert the small models using this arch therefore should not be a problem to use this arch for them. There is still the question of the naming (should they still be named BLOOM?) ",
"> 1- Yes it will be changed to BLOOM most probably in the next commit\r\n\r\nPerfect!\r\n\r\n> 2- I like the idea of making it modulable (enable/disable alibi and embed norm) so that both models would work. But in this case 13b-en could be understood as a smaller version of BLOOM which is not the case. What do you think?\r\n\r\nThat's a good point. It's a sort of Pre-BLOOM :)\r\n\r\n> 3- I managed to convert the small models using this arch therefore should not be a problem to use this arch for them. There is still the question of the naming (should they still be named BLOOM?)\r\n\r\nOK, let's discuss the last 2 on slack\r\n\r\n",
"2. Maybe something I don't understand is why make it modulable? It feels like having another GPT2 no? 13B should more of less fit inside GPT-2 no? ",
"> 13B should more of less fit inside GPT-2 no?\r\n\r\nWell, we have gone through this 6 months ago, you can definitely re-read the discussions. There were 3 things that needed to be changed in HF's GPT2, which wasn't producing the same output under fp16.",
"Yeah I read it, `layernorm` is fixed by torch, the other two can be implemented there the same way they are implemented here (you just need to make it modular as well no?).",
"Unfortunately the `transformers`'s current policy is against making things modular. So we can't add anything to gpt2 \r\n\r\nI thought that perhaps Bloom could be an exception but I won't be surprised if this will not be allowed.",
"Okay actually the `badbmm` was implemented there as well. So really we're missing only one activation which is jitted `gelu_fast`. Well why would we make this one modular to the point of really being REALLLY close to `gpt-2` then? If so the other course of action is building a new skeleton (which seems overkill for a change in activation).",
"yes already tried pushing for it - and it wasn't approved.",
"@sgugger thank you very much for your comments!\r\nFor the tokenizer since the models has not been pushed yet on the hub, I had to \"hotfix\" this by explicitly giving the path to the [bigscience tokenizer](https://huggingface.co/bigscience/tokenizer). Do you think it is a good idea to push this tokenizer to the [debug model's hub](https://huggingface.co/bigscience/bigscience-small-testing) ?\r\nAlso I have notived that Bloom is the only model on HF that does not have a slow tokenizer *and* has a fast tokenizer (usually it is either both or only the slow tokenizer). ",
"5 more small tests and we should be good!!",
"1 test left!",
"All tests finally passed!! I'll refactor the code with the suggested final changes and may ping you for a new review",
"I will need to modify the slow tests to add our custom ones",
"Just a small note on a test. Due to some stochasticity (since we are taking a random slice) [this test](https://github.com/younesbelkada/transformers/blob/cdf41e8f309a6744f4e1488bffc7be76503ccd6d/tests/models/bloom/test_modeling_bloom.py#L215) does not always pass with `atol=4e-2` (sometimes it passes with `5e-2` or `6e-2`). Therefore I've put `atol=1e-1` to be sure it passes. How accurate are we expecting this test to be? I may be wrong but I think the operations could not match at 100% (due to tensor slicing for example)",
"EDIT: Slow tests seems to work fine on the GPU, but batched generation seem to not work, I have to investigate that!",
"All tests are passing! Let me know if you need any more modification @LysandreJik @sgugger ",
"Thanks @thomasw21 for the comments!\r\nAgreed with you, regarding the alibi positional embeddings. What I'll do I think is to create the positional embeddings on-the-fly on the forward pass (since you have access to the input sequence length there). I was just worried about the computational cost of it (re-computing alibi at each inference step is more costly than computing it once) - but for the reward we get (making the model agnostic to the sequence length) I think that it's worth it\r\n ",
">What I'll do I think is to create the positional embeddings on-the-fly on the forward pass \r\n\r\nI wonder if this may break deepspeed zero-3. All params should be created when the model is created. But if it's not a param it probably should be fine.\r\n\r\nSee this issue: https://github.com/microsoft/DeepSpeed/issues/1757",
"If by param you mean torch.nn.Parameter then the alibi tensor is not a param. Since these embeddings are not learned you can just use them as a non param tensor. I think that it should be fine and will not break deepspeed zero-3",
"Looks good so far! Think we have to revisit the `dtype` config param here though - I'm against adding it to the config IMO the user should define it at runtime by passing `torch_type` to the model and then the layers relevant logic should not be:\r\n\r\n```py\r\nif config.dtype == ...\r\n```\r\nbut rather:\r\n```py\r\nif inputs_embeds.dtype == ...\r\n```\r\n\r\ncc @sgugger @stas00 ",
"> Looks good so far! Think we have to revisit the `dtype` config param here though - I'm against adding it to the config IMO the user should define it at runtime by passing `torch_type` to the model and then the layers relevant logic should not be:\r\n> \r\n> ```python\r\n> if config.dtype == ...\r\n> ```\r\n> \r\n> but rather:\r\n> \r\n> ```python\r\n> if inputs_embeds.dtype == ...\r\n> ```\r\n> \r\n> cc @sgugger @stas00\r\n\r\nThank you for the comments! I agree with the fact that we should stay in line with what is done currently and should not add any extra logic. I have applied your suggested changes, there is no more explicit initialization with the dtype from the config + there should not be any logic such as `if config.dtype == ...` \r\nBut I would still keep the `torch_dtype` param in the config file because in Megatron-DS this parameter is quite important, it helps explicitly keeping track on the precision that has been used during training.\r\n",
"> But I would still keep the torch_dtype param in the config file because in Megatron-DS this parameter is quite important, it helps explicitly keeping track on the precision that has been used during training.\r\n\r\nFYI, `torch_dtype` gets automatically added to all saved models' config files since that feature was added, so you don't need to do anything special about it. Via `save_pretrained` that is.",
"> > But I would still keep the torch_dtype param in the config file because in Megatron-DS this parameter is quite important, it helps explicitly keeping track on the precision that has been used during training.\r\n> \r\n> FYI, `torch_dtype` gets automatically added to all saved models' config files since that feature was added, so you don't need to do anything special about it. Via `save_pretrained` that is.\r\n\r\nYes but for bloom models I save the models + config files using the `convert_bloom_to_pytorch.py`script that does not use `save_pretrained`, that is also why I want to keep them in the config file",
"Ah, yes, in that case - yes, please, to adding manually the correct `torch_dtype` - thank you, @younesbelkada!",
"Added some final changes + suggestions from @thomasw21 ! Thanks to alibi shifting the tests pass with a much lower tolerance.\r\nLGTM now, let me know if you see any other changes",
"Yeah let me close it and create a new PR!",
"Moved the PR at #17474",
"> The git commit history seems to be messed up - should we maybe open a new PR here?\r\n\r\nIt appears to be due to a broken merge commit here: https://github.com/huggingface/transformers/pull/17202/commits/06d98db76f9a958bd68d8158411674ba62856de1 Didn't really need a new PR, but rolling back the bad commit. But oh well it's done.\r\n"
] | 1,652
| 1,654
| 1,653
|
CONTRIBUTOR
| null |
# What does this PR do?
Integrating BigScience converted models into the HuggingFace library!
Original PR: https://github.com/thomwolf/transformers/pull/2 that I directly moved here
- [x] add a generation test with a small model pushed on the hub
- [x] slow tests need to be modified accordingly
- [ ] add final credits to all reviewers
cc @thomasw21 @thomwolf @sgugger @stas00
EDIT: PR moved at https://github.com/huggingface/transformers/pull/17474
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17202/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17202/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17202",
"html_url": "https://github.com/huggingface/transformers/pull/17202",
"diff_url": "https://github.com/huggingface/transformers/pull/17202.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17202.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17201
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17201/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17201/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17201/events
|
https://github.com/huggingface/transformers/issues/17201
| 1,233,601,439
|
I_kwDOCUB6oc5Jh0Of
| 17,201
|
a memory leak in qqp prediction using bart
|
{
"login": "StevenTang1998",
"id": 37647985,
"node_id": "MDQ6VXNlcjM3NjQ3OTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/37647985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StevenTang1998",
"html_url": "https://github.com/StevenTang1998",
"followers_url": "https://api.github.com/users/StevenTang1998/followers",
"following_url": "https://api.github.com/users/StevenTang1998/following{/other_user}",
"gists_url": "https://api.github.com/users/StevenTang1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StevenTang1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StevenTang1998/subscriptions",
"organizations_url": "https://api.github.com/users/StevenTang1998/orgs",
"repos_url": "https://api.github.com/users/StevenTang1998/repos",
"events_url": "https://api.github.com/users/StevenTang1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/StevenTang1998/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"There is nothing we can do to help with that, as you don't seem to have the RAM necessary to hold all predictions. The only advise I have is that you should predict on parts of the dataset and not the whole. \r\n\r\nThis is not a bug in Transformers, removing the label.",
"Sorry, I don't think that. I have 512GB RAM. And I can conduct training and evaluation well, but this issue only occurs during prediction.",
"The training does not accumulate predictions and the evaluation uses the evaluation set which is smaller.",
"But I don't understand why a tensor of shape (300k,) will exceed RAM? Does trainer save intermediate hidden states during prediction?",
"Mmmmmm, it may be that the model is outputting more tensors than just the logits. I see it has `use_cache=True` in its config, can you try again by setting it to `False`?",
"OK! I try it now! Thanks!",
"Sorry, it doesn't work.",
"It takes more time and more memory with every 100 steps.",
"I have found how to solve it!",
"The `logits` in the [https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2635](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2635) returns two tensors of shape (batch_size, num_classes) and (batch_size, max_sequence_length, hidden_size).\r\nAnd the second shape will be accumulated, and finally will become a tensor of (300k, 256, 1024), and it will be out of memory.\r\n\r\nAnd if i don't accumulate it, the code can work well.",
"Yes, that is what I was saying earlier: the model does not return the predictions only but some hidden states. Not sure which option will deactivate the second one.",
"Yes, thanks for your help! And it would be nice if this could be improved."
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.19.0.dev0
- Platform: Linux-5.11.0-43-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I met the same issue as #11011. Without `--eval_accumulation_steps`, it caused a CUDA out-of-memory error. With it, it ran out of RAM and was killed by the system.
I only ran prediction on the GLUE QQP dataset using BART without fine-tuning. Since QQP has a large test set (300k examples), the prediction got slower and slower, and finally ran out of memory.
This is the script to reproduce:
```
CUDA_VISIBLE_DEVICES=0 python run_glue.py --model_name_or_path facebook/bart-large --task_name qqp --output_dir bart-large_qqp --eval_accumulation_steps 100 --do_predict --per_device_eval_batch_size 24
```
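One way to avoid accumulating the auxiliary outputs is a logits-preprocessing hook that keeps only the `(batch_size, num_classes)` logits and drops the per-token hidden states before the Trainer concatenates predictions. The sketch below is a minimal, framework-agnostic version; the `preprocess_logits_for_metrics` Trainer argument it is intended for is an assumption here — check that your `transformers` version supports it.

```python
def keep_classification_logits(logits, labels):
    """Drop auxiliary model outputs (e.g. per-token hidden states of shape
    (batch, seq_len, hidden)) so only the classification logits are
    accumulated during prediction. Intended to be passed as
    `preprocess_logits_for_metrics=...` (assumed Trainer hook)."""
    if isinstance(logits, tuple):
        # First element is assumed to be the (batch, num_classes) logits.
        logits = logits[0]
    return logits
```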
### Expected behavior
```shell
Prediction without running out of memory.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17201/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17201/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17200
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17200/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17200/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17200/events
|
https://github.com/huggingface/transformers/issues/17200
| 1,233,580,636
|
I_kwDOCUB6oc5JhvJc
| 17,200
|
almost all code related to generation in examples/pytorch/**_no_trainer.py has bugs
|
{
"login": "Namco0816",
"id": 34687537,
"node_id": "MDQ6VXNlcjM0Njg3NTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/34687537?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Namco0816",
"html_url": "https://github.com/Namco0816",
"followers_url": "https://api.github.com/users/Namco0816/followers",
"following_url": "https://api.github.com/users/Namco0816/following{/other_user}",
"gists_url": "https://api.github.com/users/Namco0816/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Namco0816/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Namco0816/subscriptions",
"organizations_url": "https://api.github.com/users/Namco0816/orgs",
"repos_url": "https://api.github.com/users/Namco0816/repos",
"events_url": "https://api.github.com/users/Namco0816/events{/privacy}",
"received_events_url": "https://api.github.com/users/Namco0816/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I believe that's in @muellerzr's todo list!",
"Yes it is, duplicate of https://github.com/huggingface/transformers/issues/17214#event-6600140325\r\n\r\nWill be getting to this on Wednesday once I'm back from vacation"
] | 1,652
| 1,652
| 1,652
|
NONE
| null |
### System Info
```shell
transformers branch main
```
### Wrong code in examples/pytorch/**_no_trainer.py
```
for step, batch in enumerate(eval_dataloader):
with torch.no_grad():
generated_tokens = accelerator.unwrap_model(model).generate(
batch["input_ids"],
attention_mask=batch["attention_mask"],
**gen_kwargs,
)
generated_tokens = accelerator.pad_across_processes(
generated_tokens, dim=1, pad_index=tokenizer.pad_token_id
)
labels = batch["labels"]
if not args.pad_to_max_length:
# If we did not pad to max length, we need to pad the labels too
labels = accelerator.pad_across_processes(batch["labels"], dim=1, pad_index=tokenizer.pad_token_id)
generated_tokens, labels = accelerator.gather((generated_tokens, labels))
generated_tokens = generated_tokens.cpu().numpy()
labels = labels.cpu().numpy()
if args.ignore_pad_token_for_loss:
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
if isinstance(generated_tokens, tuple):
generated_tokens = generated_tokens[0]
decoded_preds = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
# If we are in a multiprocess environment, the last batch has duplicates
if accelerator.num_processes > 1:
if step == len(eval_dataloader):
decoded_preds = decoded_preds[: len(eval_dataloader.dataset) - samples_seen]
decoded_labels = decoded_labels[: len(eval_dataloader.dataset) - samples_seen]
else:
samples_seen += decoded_labels.shape[0]
```
Here, inside the for loop, `step` will never equal `len(eval_dataloader)`, so the condition should be changed to `if step == len(eval_dataloader) - 1`.
Additionally, in
`samples_seen += decoded_labels.shape[0]`
`decoded_labels` is a list produced by `postprocess_text()`, and list objects have no `shape` attribute.
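Both fixes can be demonstrated with a standalone sketch (hypothetical helper name, plain Python lists standing in for the decoded predictions): the last step is compared against `len(...) - 1`, and `len()` is used instead of `.shape[0]`.

```python
def gather_without_duplicates(batches, dataset_len):
    """Mimic the *_no_trainer.py dedup logic with the two fixes applied:
    the last step index is len(batches) - 1, and list lengths are taken
    with len() since decoded predictions are Python lists."""
    preds, samples_seen = [], 0
    for step, decoded in enumerate(batches):
        if step == len(batches) - 1:           # fix: last step is len - 1
            # The last batch may contain padding duplicates in multi-process
            # evaluation; trim to the true dataset size.
            decoded = decoded[: dataset_len - samples_seen]
        else:
            samples_seen += len(decoded)       # fix: lists have no .shape
        preds.extend(decoded)
    return preds
```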
GLHF
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
just run the examples scripts provided in the readme
### Expected behavior
```shell
samples_seen exceeds the dataset size,
and an AttributeError is raised
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17200/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17199
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17199/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17199/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17199/events
|
https://github.com/huggingface/transformers/pull/17199
| 1,233,531,657
|
PR_kwDOCUB6oc43soB8
| 17,199
|
Faster implementation for SentencePieceExtractor
|
{
"login": "e-mon",
"id": 2805136,
"node_id": "MDQ6VXNlcjI4MDUxMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2805136?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-mon",
"html_url": "https://github.com/e-mon",
"followers_url": "https://api.github.com/users/e-mon/followers",
"following_url": "https://api.github.com/users/e-mon/following{/other_user}",
"gists_url": "https://api.github.com/users/e-mon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-mon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-mon/subscriptions",
"organizations_url": "https://api.github.com/users/e-mon/orgs",
"repos_url": "https://api.github.com/users/e-mon/repos",
"events_url": "https://api.github.com/users/e-mon/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-mon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17199). All of your documentation changes will be reflected on that endpoint.",
"Please have a look and let me know your feedback. \r\n@sgugger, @LysandreJik, @patil-suraj, @n1t0",
"cc @Narsil and @SaulLu ",
"Hi @e-mon ,\r\n\r\nThanks for looking into this. \r\nIs this code used often ? This code was written hastily and was supposed to only run once (since we can save the `tokenizer.json` within the `tokenizers` library which should load again pretty fast.\r\n\r\nSince we're looking at optimizing this code I propose another version which seems even faster:\r\nhttps://gist.github.com/Narsil/a6b927c4973d4d0a63b1765cfff38e55\r\n\r\n(I am using a smaller vocab for faster testing, since the slow is really excruciatingly slow).\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Or shall we just parallize it?\r\n\r\n```\r\n# coding=utf-8\r\n# Copyright 2018 The HuggingFace Inc. team.\r\n#\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n# you may not use this file except in compliance with the License.\r\n# You may obtain a copy of the License at\r\n#\r\n# http://www.apache.org/licenses/LICENSE-2.0\r\n#\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n\"\"\"\r\nUtilities to convert slow tokenizers in their fast tokenizers counterparts.\r\n\r\nAll the conversions are grouped here to gather SentencePiece dependencies outside of the fast tokenizers files and\r\nallow to make our dependency on SentencePiece optional.\r\n\"\"\"\r\n\r\nimport warnings\r\nfrom typing import Dict, List, Tuple\r\n\r\nfrom tokenizers import AddedToken, Regex, Tokenizer, decoders, normalizers, pre_tokenizers, processors\r\nfrom tokenizers.models import BPE, Unigram, WordPiece\r\nimport multiprocessing\r\nfrom functools import partial\r\nfrom .utils import requires_backends\r\ndef merge_core(vocab,vocab_scores,piece_ls):\r\n merges = []\r\n for piece_l in piece_ls:\r\n for piece_r in vocab.keys():\r\n merge = f\"{piece_l}{piece_r}\"\r\n piece_score = vocab_scores.get(merge, None)\r\n if piece_score:\r\n merges += [(piece_l, piece_r, piece_score)]\r\n return merges;\r\n\r\n\r\nclass SentencePieceExtractor:\r\n \"\"\"\r\n Extractor implementation for SentencePiece trained models. 
https://github.com/google/sentencepiece\r\n \"\"\"\r\n\r\n def __init__(self, model: str):\r\n requires_backends(self, \"sentencepiece\")\r\n from sentencepiece import SentencePieceProcessor\r\n\r\n self.sp = SentencePieceProcessor()\r\n self.sp.Load(model)\r\n\r\n def extract(self, vocab_scores=None) -> Tuple[Dict[str, int], List[Tuple]]:\r\n \"\"\"\r\n By default will return vocab and merges with respect to their order, by sending `vocab_scores` we're going to\r\n order the merges with respect to the piece scores instead.\r\n \"\"\"\r\n sp = self.sp\r\n vocab = {sp.id_to_piece(index): index for index in range(sp.GetPieceSize())}\r\n if vocab_scores is not None:\r\n vocab_scores, reverse = dict(vocab_scores), True\r\n else:\r\n vocab_scores, reverse = vocab, False\r\n pool_obj = multiprocessing.Pool();\r\n merges=pool_obj.map(partial(merge_core,vocab,vocab_scores),vocab.keys())\r\n # Merges\r\n # merges = []\r\n # for piece_l in vocab.keys():\r\n # for piece_r in vocab.keys():\r\n # merge = f\"{piece_l}{piece_r}\"\r\n # piece_score = vocab_scores.get(merge, None)\r\n # if piece_score:\r\n # merges += [(piece_l, piece_r, piece_score)]\r\n merges = sorted([item for sublist in merges for item in sublist], key=lambda val: val[2], reverse=reverse);\r\n merges = [(val[0], val[1]) for val in merges]\r\n return vocab, merges\r\n\r\n```"
] | 1,652
| 1,682
| 1,655
|
NONE
| null |
# What does this PR do?
This PR improves the performance of the `SentencePieceExtractor` `extract` method, which used to take several minutes for vocabularies of tens of thousands of words.
For 44,876 words ( [repository](https://huggingface.co/rinna/japanese-gpt-1b) ), it used to take 290 seconds, but now it takes 0.2 seconds.
I've added a simple test, but let me know if you need anything else.
The experimental conditions and code are as follows.
result
```
vocabulary length: 44876, max word length: 16
normal: 290.2607123851776 secs
improved: 0.18401217460632324 secs
```
code
```python
import pickle
import time
from collections import defaultdict
from typing import List
vocab = pickle.load(open('vocab.pkl', 'rb'))
def normal(vocab: dict) -> List[str]:
merges = []
for piece_l in vocab.keys():
for piece_r in vocab.keys():
merge = f"{piece_l}{piece_r}"
piece_id = vocab.get(merge, None)
if piece_id:
merges += [(piece_l, piece_r, piece_id)]
return merges
def improved(vocab: dict) -> List[str]:
merges = []
prefixes = dict()
for word in vocab.keys():
for i in range(len(word)):
prefixes[word[: i + 1]] = {word} | prefixes.setdefault(word[: i + 1], set())
for word in vocab.keys():
if len(prefixes[word]) > 1:
for candidate in prefixes[word]:
if word != candidate:
if candidate[len(word) :] in vocab:
piece_id = vocab.get(candidate, None)
merges += [(word, candidate[len(word) :], piece_id)]
return merges
print(f'vocabulary length: {len(vocab)}, max word length: {max(len(word) for word in vocab.keys())}')
start = time.time()
result_normal = normal(vocab)
print(f'normal: {time.time() - start} secs')
start = time.time()
result_improved = improved(vocab)
print(f'improved: {time.time() - start} secs')
# confirm that results match
assert sorted(result_normal, key=lambda val: val[2]) == sorted(result_improved, key=lambda val: val[2])
```
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@n1t0, @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17199/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17199/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17199",
"html_url": "https://github.com/huggingface/transformers/pull/17199",
"diff_url": "https://github.com/huggingface/transformers/pull/17199.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17199.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17198
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17198/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17198/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17198/events
|
https://github.com/huggingface/transformers/pull/17198
| 1,233,525,461
|
PR_kwDOCUB6oc43smvB
| 17,198
|
Fix contents in index.mdx to match docs' sidebar
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
Fix: Currently the sections in the content part of `index.mdx` do not match the sections in `_toctree.yml`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17198/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17198",
"html_url": "https://github.com/huggingface/transformers/pull/17198",
"diff_url": "https://github.com/huggingface/transformers/pull/17198.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17198.patch",
"merged_at": 1652341034000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17197
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17197/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17197/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17197/events
|
https://github.com/huggingface/transformers/pull/17197
| 1,233,504,544
|
PR_kwDOCUB6oc43sila
| 17,197
|
Fix minor style error in Spanish docs
|
{
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @sgugger ",
"Thank you very much @osanseviero! I was missing this part: `pip install -e \".[dev]\"`. Was really frustrating but a lesson learned. ๐"
] | 1,652
| 1,652
| 1,652
|
MEMBER
| null |
# What does this PR do?
This PR is a minor style fix, since CircleCI is red due to style issues. FYI, I did this in a clean environment:
```
pip install hf-doc-builder -U
pip install -e ".[dev]"
make style
```
Part of #15947
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17197/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17197",
"html_url": "https://github.com/huggingface/transformers/pull/17197",
"diff_url": "https://github.com/huggingface/transformers/pull/17197.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17197.patch",
"merged_at": 1652338306000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17196
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17196/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17196/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17196/events
|
https://github.com/huggingface/transformers/pull/17196
| 1,233,481,771
|
PR_kwDOCUB6oc43sdv2
| 17,196
|
Log the decoder chosen by GenerationMixin
|
{
"login": "mapmeld",
"id": 643918,
"node_id": "MDQ6VXNlcjY0MzkxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mapmeld",
"html_url": "https://github.com/mapmeld",
"followers_url": "https://api.github.com/users/mapmeld/followers",
"following_url": "https://api.github.com/users/mapmeld/following{/other_user}",
"gists_url": "https://api.github.com/users/mapmeld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mapmeld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mapmeld/subscriptions",
"organizations_url": "https://api.github.com/users/mapmeld/orgs",
"repos_url": "https://api.github.com/users/mapmeld/repos",
"events_url": "https://api.github.com/users/mapmeld/events{/privacy}",
"received_events_url": "https://api.github.com/users/mapmeld/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @mapmeld,\r\n\r\nThanks for the PR. \r\n\r\nTo me, that is a bit too much of an edge-case and I'm not very happy with cluttering the generation code with `if - else ` statements.\r\n\r\n@gante @patil-suraj what do you think? ",
"Hey @mapmeld! Thank you for the PR ๐ \r\n\r\nI'm also not a fan of all the `if` statements on a function whose complexity is already over the top. Perhaps we could remove all the `if` branches, keep the logging statements, but lower their logging level to `debug`. That way, a user could get all those values by setting the appropriate logging level, and it would be invisible in the vast majority of cases.\r\n\r\nWDYT?",
"@patrickvonplaten @gante That makes sense to me, log.debug level, no extra argument. I've made a commit for that",
"Agree with @gante 's comment, using `logger.debug` and getting rid of those if-else statements sounds good to me.\r\nI'm okay with having these loggings to make it more obvious which method is being used, will be useful in debugging IMO.",
"Sorry, I think I wasn't super clear in my last message. \r\n\r\nPersonally, I would prefer to not merge this PR because:\r\n- the generation code is already very complex and hard to read (talking about the code-reading part here not what's displayed to the user), don't think adding 5,6 new logger statement lines help here\r\n- How do users know that generate should be run in debug mode to display the logging statements - don't think many users will realize this\r\n- If the decoding strategy is not obvious, we should improve the docs IMO\r\n- If the user doesn't know what `top_p` does, I don't think she/he would know that a `TopPLogitsWarper` is -> don't see the added value of displaying the names in a logger here\r\n- Also not in line with how we use the logger in other places across the library",
"OK, will close then.\r\n\r\nIf I can suggest changes beyond logging to this section, here are some ideas: \r\n- throwing exceptions in the current code if a decoding argument (`typical_p`) is ignored because of an unusable value or missing companion argument (`do_sample=True`)\r\n- adding an argument to `generate()` naming the intended decoder, so it is clear in end-user code, and transformers can throw an exception for calls which don't go down the expected path for whatever reason\r\n- specific decoding functions to replace the general `generate()`, where these functions can throw exceptions / using Python type hints / be more useful in code auto-complete tools\r\n- implementing typical decoding in TensorFlow so there's more similarity between Torch and TensorFlow code",
"Thanks a lot @mapmeld - those are really nice suggestions! Also after some discussion we think it could make a lot of sense to do maybe the following:\r\n\r\n- If `kwargs` are passed to `generate` that don't exist than we throw a warning so a user is well aware if something is misspelled. \r\n- Really like the idea of warning the user if an argument is used that cannot be activated - wondering if there is a good approach that would not force us to make a lot of `if ....` statements in `generate`. Any ideas how this could be checked in a very concise way? \r\n\r\n",
"Also keen to hear suggestions from @gante :-) ",
"> implementing typical decoding in TensorFlow so there's more similarity between Torch and TensorFlow code\r\n\r\n(@mapmeld) Yeah, we are working on it :D TF generate should have a big release soon.\r\n\r\n> Really like the idea of warning the user if an argument is used that cannot be activated - wondering if there is a good approach that would not force us to make a lot of if .... statements in generate. Any ideas how this could be checked in a very concise way?\r\n\r\n(@patrickvonplaten) Without if's and else's, the cleanest solution would possibly be to hold some dictionary with all passed arguments, in addition to a set of accepted arguments for each generation type, and raise an exception with all unexpected arguments (e.g. `The passed arguments triggered greedy_search. However, for greedy_search, following arguments are not accepted: top_p. Please check the documentation here [link]`). We can actually implement it with a small effort -- the dictionary with all arguments is `locals()` at the start of the specific generation functions (e.g. `greedy_search()`) and the set of accepted arguments is the function signature except `**model_kwargs`. We can get the accepted `model_kwargs` from the model forward signature (it's not quite, but should be close enough) -- everything else that remains in `**model_kwards` is an unused parameter and should raise an exception.\r\n\r\nWDYT?",
"In a first step I was rather thinking about just warning the user if parameters are passed in `kwargs` that are not used (probs misspelled) ",
"Adding sub-generation specific logging logic sounds very complex, would be open if we find a clean, concise solution but at the moment I'd like to prevent adding hardcoded lists of which generation parameter is relevant for which sub generation method (also hard to maintain)",
"@gante the solution sounds interesting - would need to see a PR for it to fully understand it. The problem I see is that we won't detect unnecessary generation parameters since they are inside `logits_processor` and `logits_warper`",
"Overall, also just want to say here that IMO two mistakes were made a while back:\r\n\r\n- We've set defaults for some values which we should have never done IMO (`max_length` and `top_k`) have defaults which is quite counter productive for good logging\r\n- We have allowed people to set generation parameters inside the config to which the method defaults to - in the aftermath this was too much \"black-magic\" and not at all visible/understandable for (new) users.\r\n\r\nWill be very hard to remedy these things without breaking backward comp, but open to suggestions / comments!",
"Would it be possible for us to talk about it in the HF Slack? I would be interested in finding a part of this where I can contribute",
"Invited you :-) Let's chat on Slack"
] | 1,652
| 1,655
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
When calling `model.generate(content, log_decoder=True)`, the PR would log which decoder and warper(s) are actually used in generation.
I have a demo where I show text generated with different options (top_k, typical_p, repetition_penalty, num_beams, etc.). The final chosen decoding strategy is not obvious. It is tricky to test by comparing outputs because a generative model often returns different text on multiple runs.
By design the function tolerates mistakes -- if there is a missing arg (`typical_p=0.5` but no `do_sample=True`) or mismatched value (`typical_p=3`) or typo'd arg (`numBeams=2`) then the function silently chooses another decoding strategy. The code does not flag these because the remaining `**kwargs` are passed to the model.
I believe the logger is the best place to check whether decoding actually happened as expected.
Example usage: https://colab.research.google.com/drive/1DpMnZkSCtZIiaONoxfzYxYI4vgiTNYLN?usp=sharing
- ~~The first commit is unnecessary thanks to #17186~~ Rebased on this PR and adding one additional section to the documentation about typical decoding
- I'm open to renaming or removing `log_decoder` to always do `logger.info` in these places
- If we always do `logger.info`, I could move logger calls into `BeamSearchScorer`. Trying to avoid adding too many args
- Could use `logger.warn` if these issues warrant it
Discussion: https://discuss.huggingface.co/t/logging-which-decoder-selected-in-generation/18133
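The debug-level logging agreed on in the review comments could look roughly like the following (a hypothetical simplification of the dispatch in `generate()`, not the actual implementation):

```python
import logging

logger = logging.getLogger("transformers.generation")

def choose_decoder(num_beams=1, do_sample=False):
    # Simplified stand-in for generate()'s strategy selection: log the
    # chosen strategy at DEBUG level, as discussed in the comments, so it
    # stays invisible unless the user opts into verbose logging.
    if num_beams > 1:
        strategy = "beam_search"
    elif do_sample:
        strategy = "sample"
    else:
        strategy = "greedy_search"
    logger.debug("Selected decoding strategy: %s", strategy)
    return strategy
```

With this shape, `logging.getLogger("transformers.generation").setLevel(logging.DEBUG)` would surface the selection without cluttering normal runs.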
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17196/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17196",
"html_url": "https://github.com/huggingface/transformers/pull/17196",
"diff_url": "https://github.com/huggingface/transformers/pull/17196.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17196.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17195
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17195/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17195/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17195/events
|
https://github.com/huggingface/transformers/issues/17195
| 1,233,416,897
|
I_kwDOCUB6oc5JhHLB
| 17,195
|
Different logits for single/batch inputs on T5ForConditionalGeneration
|
{
"login": "rafikg",
"id": 13174842,
"node_id": "MDQ6VXNlcjEzMTc0ODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13174842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafikg",
"html_url": "https://github.com/rafikg",
"followers_url": "https://api.github.com/users/rafikg/followers",
"following_url": "https://api.github.com/users/rafikg/following{/other_user}",
"gists_url": "https://api.github.com/users/rafikg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafikg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafikg/subscriptions",
"organizations_url": "https://api.github.com/users/rafikg/orgs",
"repos_url": "https://api.github.com/users/rafikg/repos",
"events_url": "https://api.github.com/users/rafikg/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafikg/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"@patrickvonplaten can you please have a look ?"
] | 1,652
| 1,652
| 1,652
|
NONE
| null |
### System Info
```shell
transformers 4.18.0
python 3.8.10
ubuntu
pytorch
T5ForConditionalGeneration
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch.nn.functional as F
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model.resize_token_embeddings(len(tokenizer))
model.to("cuda")
model.eval()
# sequences
seq1 = "summarize: Calling the model (which means the forward method) uses the labels for teacher forcing. This means inputs to the decoder are the labels shifted by one"
output1 = "calling the model uses the labels for teacher forcing. inputs to the decoder"
seq2 = "summarize: When you call the generate method, the model is used in the autoregressive fashion"
output2 = "the model is used in the autoaggressive fashion."
seq3 = "summarize: However, selecting the token is a hard decision, and the gradient cannot be propagated through this decision"
output3 = "the token is a hard decision, and the gradient cannot be propagated through this decision"
input_sequences = [seq1, seq2, seq3]
output_seq = [output1, output2, output3]
# encoding input and attention mask
encoding = tokenizer.batch_encode_plus(
input_sequences,
padding="longest",
truncation=True,
return_tensors="pt",
)
input_ids, attention_mask = encoding.input_ids.to("cuda"), encoding.attention_mask.to("cuda")
# labels
target_encoding = tokenizer.batch_encode_plus(
output_seq, padding="longest", truncation=True, return_tensors="pt"
)
labels = target_encoding.input_ids.to("cuda")
labels[labels == tokenizer.pad_token_id] = -100
# Call the models
logits = model(input_ids=input_ids, labels=labels).logits
# Apply softmax() and batch_decode()
X = logits
X = F.softmax(X, dim=-1)
ids = X.argmax(dim=-1)
y = tokenizer.batch_decode(sequences=ids, skip_special_tokens=True)
print(y)
# results: batch_size=3
# [
# 'call the model uses the labels for teacher forcing inputs to the decoder are',
# 'model is used in the constructegressgressive fashion ',
# 'token can a token decision, and the gradient cannot be propagated through this decision '
# ]
# results: batch_size =1 i.e. consider 1 seq each time
# ['call the model uses the labels for teacher forcing inputs to the decoder are']
# ['the model is used in the auto-gressgressive fashion ']
# ['the token is a hard decision, and the gradient cannot be propagated through this decision ']
```
### Expected behavior
```shell
Having the same output sequences.
```
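One way to narrow this down (a diagnostic sketch in plain Python, not part of the original report): compare the argmax ids only at positions whose label is not the ignore index. Padded decoder positions still receive logits, and `batch_decode()` of the argmax includes them, so the decoded strings can differ between batched and single-sequence runs even when the valid positions agree. The token ids below are made up for illustration.

```python
IGNORE_INDEX = -100  # value the script writes over padded label positions

def valid_positions(label_row):
    """Indices of real (non-padded) label tokens in one row."""
    return [i for i, tok in enumerate(label_row) if tok != IGNORE_INDEX]

# Hypothetical argmax ids for the same example run in a batch vs alone;
# they agree on real positions and differ only on the padded tail.
batched_ids = [10, 11, 12, 7, 7]
single_ids = [10, 11, 12]
labels_row = [10, 11, 12, IGNORE_INDEX, IGNORE_INDEX]

keep = valid_positions(labels_row)
assert [batched_ids[i] for i in keep] == single_ids
```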
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17195/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17194
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17194/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17194/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17194/events
|
https://github.com/huggingface/transformers/pull/17194
| 1,233,347,955
|
PR_kwDOCUB6oc43sB8G
| 17,194
|
Update data2vec.mdx to include a Colab Notebook link (that shows fine-tuning)
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Yes, sure. Doing it in a while. ",
"@sgugger done."
] | 1,652
| 1,652
| 1,652
|
MEMBER
| null |
# What does this PR do?
This PR includes a link to a Colab Notebook that shows how to fine-tune the Data2Vec vision model on the task of image classification.
@sgugger @Rocketknight1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17194/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17194",
"html_url": "https://github.com/huggingface/transformers/pull/17194",
"diff_url": "https://github.com/huggingface/transformers/pull/17194.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17194.patch",
"merged_at": 1652365320000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17193
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17193/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17193/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17193/events
|
https://github.com/huggingface/transformers/issues/17193
| 1,233,332,674
|
I_kwDOCUB6oc5JgynC
| 17,193
|
[run_seq2seq_qa.py] various issues
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"No answer from @karthikrangasai, @LysandreJik - who would be a good person to look at these issues? Thank you.",
"I believe @patil-suraj has experience with similar scripts, would you like to have a look at this one when you have a minute, @patil-suraj ?",
"Great! Thank you for tagging Suraj, Lysandre! and thank you, Suraj for checking it",
"Is there any update on this?\r\n\r\nI am also trying to use the script. Additionally, I have found some more issues with this:\r\n\r\n1. It uses the doc_stride strategy to break long contexts, but at the end of the evaluation, no special handling is done and it seems it just takes into account the latest feature extracted from an example (which seems to be not a good approach)\r\n2. The post_process script is based on the feature set having an `example_id` column, but the Trainer hides that column, and the script breaks in that part. In the provided Colab, there is a tweak that \"resets\" the features dataset format for it to work. Maybe bringing this to this script?\r\n\r\n\r\n I hope this helps. It would be great to have a script for that ....",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,652
| 1,665
| 1,659
|
CONTRIBUTOR
| null |
I was trying to use `examples/pytorch/question-answering/run_seq2seq_qa.py` to write a test and run into multiple issues.
A. the example is not working:
https://github.com/huggingface/transformers/blob/d1d5ebb16cc8500a3e4e1b30047312cc563ca87f/examples/pytorch/question-answering/README.md#fine-tuning-t5-on-squad20
1. running as is fails with:
```
ValueError: --answer_column' value 'answer' needs to be one of: id, title, context, question, answers
```
The example should say `--answer_column answers` (not `answer`)
2. ok, trying to move forward:
```
$ python examples/pytorch/question-answering/run_seq2seq_qa.py --model_name_or_path t5-small --dataset_name squad_v2 --context_column context --question_column question --answer_column answers --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 1 --max_seq_length 384 --doc_stride 128 --output_dir /tmp/debug_seq2seq_squad/
```
crashes with:
```
05/11/2022 18:02:23 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/squad_v2/squad_v2/2.0.0/09187c73c1b837c95d9a249cd97c2c3f1cebada06efe667b4427714b27639b1d/cache-d9a027917b78cfa7.arrow
Running tokenizer on validation dataset: 0%| | 0/12 [00:00<?, ?ba/s]05/11/2022 18:02:23 - INFO - datasets.arrow_dataset - Caching processed dataset at /home/stas/.cache/huggingface/datasets/squad_v2/squad_v2/2.0.0/09187c73c1b837c95d9a249cd97c2c3f1cebada06efe667b4427714b27639b1d/cache-0b463497dc4250c6.arrow
Running tokenizer on validation dataset: 0%| | 0/12 [00:03<?, ?ba/s]
Traceback (most recent call last):
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 687, in <module>
main()
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 519, in main
eval_dataset = eval_examples.map(
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2346, in map
return self._map_single(
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 532, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 499, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2751, in _map_single
writer.write_batch(batch)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/arrow_writer.py", line 510, in write_batch
pa_table = pa.Table.from_arrays(arrays, schema=schema)
File "pyarrow/table.pxi", line 1702, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1314, in pyarrow.lib.Table.validate
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 4 named labels expected length 1007 but got length 1000
```
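For context on the length mismatch above: with `doc_stride`, one long example can be split into several overlapping features, so a batched `map` may return more rows than it received — if another column still carries the original example count, pyarrow raises exactly this kind of error. A toy sketch of the feature-count blow-up (hypothetical numbers, not the actual tokenizer code):

```python
def split_with_stride(tokens, max_len, stride):
    """Split a long sequence into overlapping chunks, doc_stride style."""
    chunks, start = [], 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
        start += max_len - stride
    return chunks

examples = [list(range(10)), list(range(3))]
features = [chunk for ex in examples for chunk in split_with_stride(ex, 4, 1)]
print(len(examples), len(features))  # 2 examples yield 4 features
```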
B. If I try to switch from `run_qa.py` to this script, e.g.:
```
python examples/pytorch/question-answering/run_seq2seq_qa.py --model_name_or_path valhalla/t5-base-squad --tokenizer_name valhalla/t5-base-squad --dataset_name squad --output_dir ./xxx --overwrite_output_dir --optim adafactor --do_train --max_train_samples 3 --do_eval --max_eval_samples 1 --logging_strategy steps --logging_steps 1 --evaluation_strategy steps --eval_steps 1 --save_strategy steps --save_steps 1 --load_best_model_at_end --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --num_train_epochs 1 --report_to none --fp16
```
it crashes on:
```
Traceback (most recent call last): | 0/1 [00:00<?, ?it/s]
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 687, in <module>
main()
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 623, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 1317, in train
return inner_training_loop(
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 1629, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 1801, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/mnt/nvme0/code/huggingface/transformers-master/examples/pytorch/question-answering/trainer_seq2seq_qa.py", line 71, in evaluate
eval_preds = self.post_process_function(eval_examples, eval_dataset, output)
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 580, in post_processing_function
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3287, in batch_decode
return [
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3288, in <listcomp>
self.decode(
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3326, in decode
return self._decode(
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_fast.py", line 547, in _decode
text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
TypeError: 'list' object cannot be interpreted as an integer
```
basically:
```
preds == [[[nan, nan, ..., nan]]]
```
and there are 2 problems here:
1. it has one level too many of nesting - hence the error above
2. if I manually tweak it to pass `preds[0]` it then fails to deal with `nan` and then fails with:
```
Traceback (most recent call last):
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 687, in <module>
main()
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 623, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 1317, in train
return inner_training_loop(
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 1629, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 1801, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/mnt/nvme0/code/huggingface/transformers-master/examples/pytorch/question-answering/trainer_seq2seq_qa.py", line 71, in evaluate
eval_preds = self.post_process_function(eval_examples, eval_dataset, output)
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 580, in post_processing_function
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3287, in batch_decode
return [
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3288, in <listcomp>
self.decode(
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3326, in decode
return self._decode(
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_fast.py", line 548, in _decode
text = self._tokenizer.decode(token_ids[0], skip_special_tokens=skip_special_tokens)
TypeError: 'float' object cannot be interpreted as an integer
```
C. The test that exercises this script uses a local sample and it succeeds:
```
python examples/pytorch/question-answering/run_seq2seq_qa.py \
--model_name_or_path t5-small \
--context_column context \
--question_column question \
--answer_column answers \
--version_2_with_negative \
--train_file tests/fixtures/tests_samples/SQUAD/sample.json \
--validation_file tests/fixtures/tests_samples/SQUAD/sample.json \
--output_dir /tmp/debug_seq2seq_squad/ \
--overwrite_output_dir \
--max_steps=10 \
--warmup_steps=2 \
--do_train \
--do_eval \
--learning_rate=2e-4 \
--per_device_train_batch_size=2 \
--per_device_eval_batch_size=1 \
--predict_with_generate
```
but once switching to `--dataset_name squad_v2` it breaks.
Environment-wise I'm using all the latest versions of datasets, transformers, etc. Please let me know if you need any specific versions of anything if you can't reproduce those issues.
### Who can help?
Tagging @karthikrangasai who created this script, but of course if others know how to fix it please don't hesitate to step in. Thank you!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17193/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/17193/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17192
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17192/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17192/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17192/events
|
https://github.com/huggingface/transformers/pull/17192
| 1,233,236,884
|
PR_kwDOCUB6oc43rqjq
| 17,192
|
Remove duplicated os.path.join in Trainer._load_rng_state
|
{
"login": "shijie-wu",
"id": 2987758,
"node_id": "MDQ6VXNlcjI5ODc3NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2987758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shijie-wu",
"html_url": "https://github.com/shijie-wu",
"followers_url": "https://api.github.com/users/shijie-wu/followers",
"following_url": "https://api.github.com/users/shijie-wu/following{/other_user}",
"gists_url": "https://api.github.com/users/shijie-wu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shijie-wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shijie-wu/subscriptions",
"organizations_url": "https://api.github.com/users/shijie-wu/orgs",
"repos_url": "https://api.github.com/users/shijie-wu/repos",
"events_url": "https://api.github.com/users/shijie-wu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shijie-wu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
Remove duplicated os.path.join in `Trainer._load_rng_state`
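A hypothetical reconstruction of the kind of bug being removed (not the actual `Trainer` code) — nesting `os.path.join` so the checkpoint directory gets joined twice:

```python
import os

checkpoint = "checkpoint-500"

# duplicated join: the inner call already prefixes the checkpoint directory
bad = os.path.join(checkpoint, os.path.join(checkpoint, "rng_state.pth"))
# fixed: join once
good = os.path.join(checkpoint, "rng_state.pth")
print(bad)   # checkpoint-500/checkpoint-500/rng_state.pth on POSIX
print(good)  # checkpoint-500/rng_state.pth on POSIX
```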
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17192/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17192",
"html_url": "https://github.com/huggingface/transformers/pull/17192",
"diff_url": "https://github.com/huggingface/transformers/pull/17192.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17192.patch",
"merged_at": 1652315313000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17191
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17191/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17191/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17191/events
|
https://github.com/huggingface/transformers/issues/17191
| 1,233,228,529
|
I_kwDOCUB6oc5JgZLx
| 17,191
|
Mistake in the BART doc & inconsistency between code & doc
|
{
"login": "JulesGM",
"id": 3231217,
"node_id": "MDQ6VXNlcjMyMzEyMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3231217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JulesGM",
"html_url": "https://github.com/JulesGM",
"followers_url": "https://api.github.com/users/JulesGM/followers",
"following_url": "https://api.github.com/users/JulesGM/following{/other_user}",
"gists_url": "https://api.github.com/users/JulesGM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JulesGM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JulesGM/subscriptions",
"organizations_url": "https://api.github.com/users/JulesGM/orgs",
"repos_url": "https://api.github.com/users/JulesGM/repos",
"events_url": "https://api.github.com/users/JulesGM/events{/privacy}",
"received_events_url": "https://api.github.com/users/JulesGM/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"One could fix this by putting\r\n```python\r\nif attention_mask is None:\r\n attention_mask = input_ids != self.config.pad_token_id\r\n```\r\nsomewhere",
"Hi @JulesGM ! Thanks for reporting this, the doc should be changed to mention the `_prepare_decoder_attention_mask` method. But note that the `_prepare_decoder_attention_mask` prepares the causal mask for the decoder and then combines it with the `decoder_attention_mask` if it's passed by the user. The causal mask is not meant to ignore padding tokens hence it doesn't look at the `decoder_input_ids`.\r\n\r\nAlso, we don't automatically prepare `decoder_attention_mask` because we can't always assume that the user wants to mask padding tokens. So the user should pass it, if he/she wants it.",
"Great thank. I mentioned that because the doc says it did. Just FYI, other\nmodels also have references to the function that doesn't exist.\nI wish there was an easy way to ignore pad tokens in decoder input, to be\nable to condition on some already generated text in the decoder,\nlike scratchpads https://arxiv.org/pdf/2112.00114.pdf. Right now it's very\nhard, I have to write a custom way to cache the previous positions to only\nincrement the positional encoders the proper amount with caching.\n\n\nOn Thu, May 12, 2022 at 9:05 AM Suraj Patil ***@***.***>\nwrote:\n\n> Hi @JulesGM <https://github.com/JulesGM> ! Thanks for reporting this, the\n> doc should be changed to mentioned the _prepare_decoder_attention_mask\n> method. But note that, the _prepare_decoder_attention_mask prepares the\n> casual mask for the decoder and then combines it with the\n> decoder_attention_mask if it's passed by the user. The causal mask is not\n> meant to ignore padding tokens hence it doesn't look at the\n> decoder_input_ids.\n>\n> Also, we don't automatically prepare decoder_attention_mask because we\n> can't always assume that the user wants to mask padding tokens. So the user\n> should pass it, if he/she wants it.\n>\n> โ\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/17191#issuecomment-1124969687>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AAYU34MU4AOV42UGDXLCSR3VJT625ANCNFSM5VWKKPAQ>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n"
] | 1,652
| 1,652
| 1,652
|
NONE
| null |
Version: Most recent in the Github repo.
Model: BART.
Description:
https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py#L627 says to look at `modeling_bart._prepare_decoder_inputs` to modify the default behavior of `decoder_attention_mask`, but there is no `_prepare_decoder_inputs` anywhere in the Huggingface Transformers repository. I guess it's an artifact from a previous version, and that the function is now called `_prepare_decoder_attention_mask`. However, this method doesn't seem to look at the values of the decoder inputs anywhere, so I don't think it does what the doc says, i.e. mask the pad tokens. Or is this done somewhere else?
Thanks.
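For illustration, a minimal sketch of building such a padding mask by hand (plain Python lists instead of tensors; `pad_token_id=1` is BART's default but is an assumption in this sketch):

```python
def make_padding_mask(input_ids, pad_token_id):
    """Return 1 for real tokens and 0 for pad tokens."""
    return [[0 if tok == pad_token_id else 1 for tok in seq]
            for seq in input_ids]

pad_token_id = 1  # assumption: BART's default pad token id
decoder_input_ids = [[2, 42, 7, 9, 1, 1],   # padded with two pad tokens
                     [2, 13, 5, 9, 8, 6]]   # full-length sequence
mask = make_padding_mask(decoder_input_ids, pad_token_id)
print(mask)  # [[1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1]]
```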
### Who can help?
@patil-suraj
### Expected behavior
```shell
Mask pad tokens by default in the decoder.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17191/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17190
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17190/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17190/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17190/events
|
https://github.com/huggingface/transformers/pull/17190
| 1,233,181,193
|
PR_kwDOCUB6oc43rehB
| 17,190
|
Fix numpy VisibleDeprecationWarning for question answering pipeline.
|
{
"login": "mygithubid1",
"id": 19863166,
"node_id": "MDQ6VXNlcjE5ODYzMTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/19863166?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mygithubid1",
"html_url": "https://github.com/mygithubid1",
"followers_url": "https://api.github.com/users/mygithubid1/followers",
"following_url": "https://api.github.com/users/mygithubid1/following{/other_user}",
"gists_url": "https://api.github.com/users/mygithubid1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mygithubid1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mygithubid1/subscriptions",
"organizations_url": "https://api.github.com/users/mygithubid1/orgs",
"repos_url": "https://api.github.com/users/mygithubid1/repos",
"events_url": "https://api.github.com/users/mygithubid1/events{/privacy}",
"received_events_url": "https://api.github.com/users/mygithubid1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17128 .
`VisibleDeprecationWarning` is addressed by specifying `dtype=object` when creating the numpy array. [This post](https://forums.fast.ai/t/visibledeprecationwarning-creating-an-ndarray-from-ragged-nested-sequences-is-deprecated/81774/3) provides a bit more context on what the warning means.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Here's the [link](https://github.com/huggingface/transformers/issues/17128) .
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@LysandreJik, @n1t0
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17190/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17190",
"html_url": "https://github.com/huggingface/transformers/pull/17190",
"diff_url": "https://github.com/huggingface/transformers/pull/17190.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17190.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17189
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17189/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17189/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17189/events
|
https://github.com/huggingface/transformers/issues/17189
| 1,233,102,452
|
I_kwDOCUB6oc5Jf6Z0
| 17,189
|
Fine tuning error in /models/t5/modeling_t5.py
|
{
"login": "ZHM-Sesame",
"id": 17838546,
"node_id": "MDQ6VXNlcjE3ODM4NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/17838546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZHM-Sesame",
"html_url": "https://github.com/ZHM-Sesame",
"followers_url": "https://api.github.com/users/ZHM-Sesame/followers",
"following_url": "https://api.github.com/users/ZHM-Sesame/following{/other_user}",
"gists_url": "https://api.github.com/users/ZHM-Sesame/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZHM-Sesame/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZHM-Sesame/subscriptions",
"organizations_url": "https://api.github.com/users/ZHM-Sesame/orgs",
"repos_url": "https://api.github.com/users/ZHM-Sesame/repos",
"events_url": "https://api.github.com/users/ZHM-Sesame/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZHM-Sesame/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nThe `run_glue.py` script is only meant to work out-of-the-box for encoder-only Transformers, such as BERT, RoBERTa, DistilBERT, DeBERTa, etc.\r\n\r\nT5 is an encoder-decoder model, and would require several changes to the script.",
"> \r\nI see. \r\n\r\nThank you."
] | 1,652
| 1,652
| 1,652
|
NONE
| null |
### System Info
```shell
Keeps getting this error:
"ValueError: not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
File "run_glue_t5.py", line 591, in <module>
main()
File "run_glue_t5.py", line 509, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1400, in train
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1984, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2016, in compute_loss
outputs = model(**inputs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1149, in _call_impl
result = forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1635, in forward
decoder_outputs = self.decoder(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1149, in _call_impl
result = forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 933, in forward
batch_size, seq_length = input_shape
ValueError: not enough values to unpack (expected 2, got 1)
0%| | 0/3796 [00:03<?, ?it/s]
2022-05-11 19:21:02,332 sagemaker-training-toolkit ERROR Reporting training FAILURE
2022-05-11 19:21:02,333 sagemaker-training-toolkit ERROR ExecuteUserScriptError:
ExitCode 1
ErrorMessage "ValueError: not enough values to unpack (expected 2, got 1)
0%| | 0/3796 [00:03<?, ?it/s]"
Command "/opt/conda/bin/python3.8 run_glue_t5.py --do_train True --learning_rate 2e-05 --max_seq_length 128 --model_name_or_path t5-small --num_train_epochs 1 --output_dir /opt/ml/model/t5_small --per_device_train_batch_size 64 --train_file /opt/ml/input/data/train/train.csv --validation_file /opt/ml/input/data/val/val.csv"
2022-05-11 19:21:02,333 sagemaker-training-toolkit ERROR Encountered exit_code 1
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For the T5 series, like 't5-small', the model definition in run_glue.py should be:
model = MT5ForConditionalGeneration.from_pretrained(...)
instead of
model = AutoModelForSequenceClassification.from_pretrained(...)  # this line doesn't work
After changing the above line, it still fails with "ValueError: not enough values to unpack (expected 2, got 1)" from the line `batch_size, seq_length = input_shape` in modeling_t5.py.
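The unpack failure can be reproduced in isolation — it simply means the decoder received a 1-D shape where `(batch_size, seq_length)` was expected:

```python
input_shape = (128,)  # 1-D: the decoder got a flat tensor instead of a batch
try:
    batch_size, seq_length = input_shape
except ValueError as err:
    msg = str(err)
print(msg)  # not enough values to unpack (expected 2, got 1)

input_shape = (64, 128)  # a 2-D shape unpacks fine
batch_size, seq_length = input_shape
```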
```
import sagemaker
from sagemaker.huggingface import HuggingFace
hyperparameters = {
'model_name_or_path':'t5-small',
'output_dir':'/opt/ml/model/t5_small',
'max_seq_length':128,
'per_device_train_batch_size' : 64,
'learning_rate' : 2e-5,
'num_train_epochs': 1,
'do_train': True,
'train_file': '/opt/ml/input/data/train/train.csv',
'validation_file': '/opt/ml/input/data/val/val.csv',
}
git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.17.0'}
huggingface_estimator = HuggingFace(
entry_point='run_glue.py',
source_dir='./examples/pytorch/text-classification',
instance_type='ml.g5.xlarge',
instance_count=1,
role=role,
git_config=git_config,
transformers_version='4.17.0',
pytorch_version='1.10.2',
py_version='py38',
hyperparameters = hyperparameters
)
```
### Expected behavior
```shell
start training
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17189/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17188
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17188/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17188/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17188/events
|
https://github.com/huggingface/transformers/pull/17188
| 1,233,077,022
|
PR_kwDOCUB6oc43rIlH
| 17,188
|
add shift_tokens_right in mT5
|
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
MEMBER
| null |
# What does this PR do?
Adds the missing `shift_tokens_right` in FlaxMT5.
Fixes #15771
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17188/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17188",
"html_url": "https://github.com/huggingface/transformers/pull/17188",
"diff_url": "https://github.com/huggingface/transformers/pull/17188.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17188.patch",
"merged_at": 1652297501000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17187
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17187/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17187/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17187/events
|
https://github.com/huggingface/transformers/pull/17187
| 1,233,064,525
|
PR_kwDOCUB6oc43rF7A
| 17,187
|
Remove columns before passing to data collator
|
{
"login": "Yard1",
"id": 10364161,
"node_id": "MDQ6VXNlcjEwMzY0MTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/10364161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yard1",
"html_url": "https://github.com/Yard1",
"followers_url": "https://api.github.com/users/Yard1/followers",
"following_url": "https://api.github.com/users/Yard1/following{/other_user}",
"gists_url": "https://api.github.com/users/Yard1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yard1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yard1/subscriptions",
"organizations_url": "https://api.github.com/users/Yard1/orgs",
"repos_url": "https://api.github.com/users/Yard1/repos",
"events_url": "https://api.github.com/users/Yard1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yard1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sgugger",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
Removes columns before they are passed to the data collator in the non `datasets.Dataset` case.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17187/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17187",
"html_url": "https://github.com/huggingface/transformers/pull/17187",
"diff_url": "https://github.com/huggingface/transformers/pull/17187.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17187.patch",
"merged_at": 1652299113000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17186
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17186/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17186/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17186/events
|
https://github.com/huggingface/transformers/pull/17186
| 1,232,915,804
|
PR_kwDOCUB6oc43qmtZ
| 17,186
|
docs for typical decoding
|
{
"login": "jadermcs",
"id": 7156771,
"node_id": "MDQ6VXNlcjcxNTY3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7156771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jadermcs",
"html_url": "https://github.com/jadermcs",
"followers_url": "https://api.github.com/users/jadermcs/followers",
"following_url": "https://api.github.com/users/jadermcs/following{/other_user}",
"gists_url": "https://api.github.com/users/jadermcs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jadermcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jadermcs/subscriptions",
"organizations_url": "https://api.github.com/users/jadermcs/orgs",
"repos_url": "https://api.github.com/users/jadermcs/repos",
"events_url": "https://api.github.com/users/jadermcs/events{/privacy}",
"received_events_url": "https://api.github.com/users/jadermcs/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds a description for the `typical_p` parameter introduced in #15504, as the docs for this parameter were missing.
@cimeister
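For context, a simplified, dependency-free sketch of the locally typical sampling filter that `typical_p` controls. This is an illustration only, not the actual `transformers` implementation (which operates on batched logits tensors): tokens are ranked by how close their surprisal is to the distribution's entropy, and the smallest such set reaching cumulative probability `typical_p` is kept.

```python
import math

def typical_filter(probs, typical_p=0.9):
    """Keep the 'locally typical' token set: tokens whose surprisal
    (-log p) is closest to the entropy of the distribution, taking the
    smallest set whose cumulative probability reaches typical_p."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    # Rank token indices by |surprisal - entropy|, ascending.
    ranked = sorted(range(len(probs)),
                    key=lambda i: abs(-math.log(probs[i]) - entropy))
    kept, cum = [], 0.0
    for i in ranked:
        kept.append(i)
        cum += probs[i]
        if cum >= typical_p:
            break
    return sorted(kept)
```

With a peaked distribution, the most probable token is not necessarily kept first; the token whose surprisal best matches the entropy is.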
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17186/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17186/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17186",
"html_url": "https://github.com/huggingface/transformers/pull/17186",
"diff_url": "https://github.com/huggingface/transformers/pull/17186.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17186.patch",
"merged_at": 1652894323000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17185
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17185/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17185/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17185/events
|
https://github.com/huggingface/transformers/issues/17185
| 1,232,882,878
|
I_kwDOCUB6oc5JfEy-
| 17,185
|
Unable to retrieve layers from model in tensorflow
|
{
"login": "old-school-kid",
"id": 56781123,
"node_id": "MDQ6VXNlcjU2NzgxMTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/56781123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/old-school-kid",
"html_url": "https://github.com/old-school-kid",
"followers_url": "https://api.github.com/users/old-school-kid/followers",
"following_url": "https://api.github.com/users/old-school-kid/following{/other_user}",
"gists_url": "https://api.github.com/users/old-school-kid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/old-school-kid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/old-school-kid/subscriptions",
"organizations_url": "https://api.github.com/users/old-school-kid/orgs",
"repos_url": "https://api.github.com/users/old-school-kid/repos",
"events_url": "https://api.github.com/users/old-school-kid/events{/privacy}",
"received_events_url": "https://api.github.com/users/old-school-kid/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Unfortunately, this method probably won't work for our models, because we implement the core of the model as a `MainLayer` class, and so the actual `Model` generally only has one \"layer\". In addition, our models and layers are implemented by subclassing, which means the order of the layers is not well-defined.\r\n\r\nIf you want to access sub-layers, you'll need to use [the actual Python structure of the class](https://github.com/huggingface/transformers/blob/main/src/transformers/models/roberta/modeling_tf_roberta.py#L912), rather than Keras methods. So `model.roberta.embeddings` will give you the embedding layer, and `model.roberta.encoder.layer` will give you a list of the other model layers. [Depending on your model](https://github.com/huggingface/transformers/blob/a42242da7c44d64c66a878cca65bc86dd3f626af/src/transformers/models/roberta/modeling_tf_roberta.py#L587-L590), there may also be a `model.roberta.pooler`.",
"Ah!\r\nThanks for the clarification!"
] | 1,652
| 1,652
| 1,652
|
NONE
| null |
### System Info
```shell
transformers version: 4.18.0
platform: Google Colab
python version: 3.7.13
```
### Who can help?
@Rocketknight1
I am training a RoBERTa-large model for a classification task, starting from a pre-trained model. For my task I want to freeze the embedding layer and the first few encoder layers, so that I can fine-tune the attention weights of only the last few encoder layers. However, I cannot access the layers when using TensorFlow.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
from transformers import RobertaTokenizer, TFRobertaModel
import tensorflow as tf
model = TFRobertaModel.from_pretrained('roberta-large')
model.get_layer(2)
### Expected behavior
```shell
This should have returned a layer instance but instead throws the error
`ValueError: Was asked to retrieve layer at index 10 but model only has 1 layers.`
```
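A dependency-free sketch of the structure described in the first comment above (class and attribute names assumed from `modeling_tf_roberta.py`; this mocks the shape of the model, it is not actual Keras internals): the whole transformer lives in a single `MainLayer`, so Keras' `get_layer(index)` sees only one layer, and sub-layers must be reached as Python attributes instead.

```python
# Mock of the TFRobertaModel layout: Keras sees one layer, but the
# real sub-layers hang off Python attributes such as
# model.roberta.embeddings and model.roberta.encoder.layer.
class Encoder:
    def __init__(self, num_layers):
        self.layer = [f"encoder_layer_{i}" for i in range(num_layers)]

class MainLayer:  # plays the role of model.roberta
    def __init__(self):
        self.embeddings = "embeddings"
        self.encoder = Encoder(24)  # roberta-large has 24 encoder layers

class Model:  # plays the role of TFRobertaModel
    def __init__(self):
        self.roberta = MainLayer()
        self.layers = [self.roberta]  # all Keras ever sees

model = Model()
assert len(model.layers) == 1                 # why get_layer(2) raises
assert len(model.roberta.encoder.layer) == 24 # sub-layers via attributes
```

Freezing would then be done by setting `trainable = False` on those attribute-accessed sub-layers rather than via `get_layer`.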
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17185/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17184
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17184/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17184/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17184/events
|
https://github.com/huggingface/transformers/issues/17184
| 1,232,848,303
|
I_kwDOCUB6oc5Je8Wv
| 17,184
|
Forward outputs on multiple sequences is wrong
|
{
"login": "rafikg",
"id": 13174842,
"node_id": "MDQ6VXNlcjEzMTc0ODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13174842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafikg",
"html_url": "https://github.com/rafikg",
"followers_url": "https://api.github.com/users/rafikg/followers",
"following_url": "https://api.github.com/users/rafikg/following{/other_user}",
"gists_url": "https://api.github.com/users/rafikg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafikg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafikg/subscriptions",
"organizations_url": "https://api.github.com/users/rafikg/orgs",
"repos_url": "https://api.github.com/users/rafikg/repos",
"events_url": "https://api.github.com/users/rafikg/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafikg/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[] | 1,652
| 1,652
| 1,652
|
NONE
| null |
### System Info
```shell
latest version of transformers
pytorch
python 3.10
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model.resize_token_embeddings(len(tokenizer))
model.to("cuda")
model.eval()
# sequences
seq1 = "summarize: Calling the model (which means the forward method) uses the labels for teacher forcing. This means inputs to the decoder are the labels shifted by one"
output1 = "calling the model uses the labels for teacher forcing. inputs to the decoder"
seq2 = "summarize: When you call the generate method, the model is used in the autoregressive fashion"
output2 = "the model is used in the auto-aggressive fashion."
seq3 = "summarize: However, selecting the token is a hard decision, and the gradient cannot be propagated through this decision"
output3 = "the token is a hard decision, and the gradient cannot be propagated through this decision"
input_sequences = [seq1, seq2, seq3]
output_seq = [output1, output2, output3]
# encoding input and attention mask
encoding = tokenizer(
input_sequences,
padding="longest",
max_length=128,
truncation=True,
return_tensors="pt",
)
input_ids, attention_mask = encoding.input_ids.to("cuda"), encoding.attention_mask.to("cuda")
# labels
target_encoding = tokenizer(
output_seq, padding="longest", max_length=128, truncation=True
)
labels = target_encoding.input_ids
labels = torch.tensor(labels).to("cuda")
labels[labels == tokenizer.pad_token_id] = -100
# Call the models
logits = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels).logits
# Apply softmax() and batch_decode()
X = logits
X = F.softmax(X, dim=-1)
ids = X.argmax(dim=-1)
y = tokenizer.batch_decode(sequences=ids, skip_special_tokens=True)
# results: batch_size=3
['call the model uses the labels for teacher forcing inputs to the decoder are',
'the model is used in the auto-aggressive fashion the the the',
'the token is a hard decision, and the gradient cannot be propagated through this decision ']
# results: batch_size =1 i.e. consider 1 seq each time
['call the model uses the labels for teacher forcing inputs to the decoder are']
['the model is used in the auto-aggressive fashion ']
['the token is a hard decision, and the gradient cannot be propagated through this decision ']
```
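A hedged, dependency-free sketch of where the batched/single-sequence difference comes from: with `padding="longest"`, the shorter sequences in the batch get extra decoder positions corresponding to label padding (set to `-100` in the repro above), and `argmax` over those positions yields stray tokens. Trimming predictions at the padded label positions (the helper below is hypothetical, not a `transformers` API) makes the two settings comparable.

```python
PAD = -100  # label padding value used in the repro above

def trim_to_labels(pred_ids, label_ids):
    """Keep only predictions at positions where the label is not padding."""
    return [p for p, l in zip(pred_ids, label_ids) if l != PAD]

# Toy batch: sequence 0 is shorter, so its labels end in padding and its
# predictions carry junk ids (99) at the padded positions.
batch_preds = [[10, 11, 12, 99, 99], [20, 21, 22, 23, 24]]
batch_labels = [[10, 11, 12, PAD, PAD], [20, 21, 22, 23, 24]]

trimmed = [trim_to_labels(p, l) for p, l in zip(batch_preds, batch_labels)]
# trimmed[0] == [10, 11, 12]: the padded-position junk is gone.
```

The same idea applies to the decoded strings: the extra trailing words for the shorter sequences in the batched run correspond to padded positions.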
### Expected behavior
```shell
running the model on a batch should give the same results as running on each sequence individually
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17184/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17183
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17183/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17183/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17183/events
|
https://github.com/huggingface/transformers/pull/17183
| 1,232,842,736
|
PR_kwDOCUB6oc43qXrK
| 17,183
|
Add onnx export cuda support
|
{
"login": "JingyaHuang",
"id": 44135271,
"node_id": "MDQ6VXNlcjQ0MTM1Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/44135271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JingyaHuang",
"html_url": "https://github.com/JingyaHuang",
"followers_url": "https://api.github.com/users/JingyaHuang/followers",
"following_url": "https://api.github.com/users/JingyaHuang/following{/other_user}",
"gists_url": "https://api.github.com/users/JingyaHuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JingyaHuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JingyaHuang/subscriptions",
"organizations_url": "https://api.github.com/users/JingyaHuang/orgs",
"repos_url": "https://api.github.com/users/JingyaHuang/repos",
"events_url": "https://api.github.com/users/JingyaHuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/JingyaHuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Great, thanks for adding this @JingyaHuang!\r\n> \r\n> If I understand correctly, this enables tracing half-precision models?\r\n\r\nHi @michaelbenayoun , \r\nYes, but only for PyTorch since `tf2onnx` has [specified the device to be CPU](https://github.com/onnx/tensorflow-onnx/blob/main/tf2onnx/convert.py#L609).",
"> Thanks for iterating on this @JingyaHuang !\r\n> \r\n> I've left a few final nits, but this is looking really nice :)\r\n> \r\n> Could you please confirm that the slow tests pass on both CPU and GPU devices?\r\n> \r\n> ```\r\n> RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py\r\n> ```\r\n\r\nHi @lewtun , by running the slow tests on CPU and GPU, I got the following results. It seems that some models and tasks failed. Trying to find out the root of the problems now.\r\n```\r\n======================================================================================== short test summary info ========================================================================================\r\nFAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_12_bert_next_sentence_prediction - ValueError: next-sentence-prediction is not a supported task, supported tasks: dict_ke...\r\nFAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_71_mobilebert_next_sentence_prediction - ValueError: next-sentence-prediction is not a supported task, supported tasks: d...\r\nFAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_12_bert_next_sentence_prediction - ValueError: next-sentence-prediction is not a supported task, supported tasks:...\r\nFAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_20_big_bird_question_answering - AssertionError: big-bird, question-answering -> Expected all tensors to be on th...\r\nFAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_71_mobilebert_next_sentence_prediction - ValueError: next-sentence-prediction is not a supported task, supported ...\r\n========================================================== 5 failed, 177 passed, 77 skipped, 43 deselected, 158 warnings in 2478.21s (0:41:18) ==========================================================\r\n```",
"Oh yes, we recently reverted the next-sentence-prediction feature in #17276, so rebasing on `main` should fix those. The BigBird error looks more related to your PR, so let me know if you need some help debugging it :)",
"> Oh yes, we recently reverted the next-sentence-prediction feature in #17276, so rebasing on `main` should fix those. The BigBird error looks more related to your PR, so let me know if you need some help debugging it :)\r\n\r\nHi @lewtun , thanks for the details. After rebasing, all checks for bert passed. And the problem of big bird comes from a [bug in the modeling](https://github.com/huggingface/transformers/blob/main/src/transformers/models/big_bird/modeling_big_bird.py#L3102), that while creating the `token_type_ids`, the device is not specified lead to a mismatch of devices. I just fixed that. Now all checks of `pytorch_export` either on CPU or on CUDA passed.",
"> After rebasing, all checks for bert passed.\r\n\r\nCool! Just to double-check, did you run the tests:\r\n\r\n* On a CPU machine (no GPU, CUDA installed)\r\n* On a GPU machine\r\n\r\nI'd like to be sure we don't accidentally break the test suite for developers coding on CPU machines :)"
] | 1,652
| 1,653
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
* Add CUDA support for `transformers.onnx.export_pytorch`.
* Add test for `transformers.onnx.export_pytorch` on CUDA.
# Context
While executing `optimum.ORTTrainer` with `--deepspeed` and `--fp16` enabled, the export to ONNX will fail, since not all layers of the models are implemented for half-precision on CPU. Tracing on CUDA is needed as a workaround.
## Who can review?
@michaelbenayoun @lewtun
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17183/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17183",
"html_url": "https://github.com/huggingface/transformers/pull/17183",
"diff_url": "https://github.com/huggingface/transformers/pull/17183.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17183.patch",
"merged_at": 1652889133000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17182
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17182/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17182/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17182/events
|
https://github.com/huggingface/transformers/pull/17182
| 1,232,747,260
|
PR_kwDOCUB6oc43qD26
| 17,182
|
ViT and Swin symbolic tracing with torch.fx
|
{
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
MEMBER
| null |
# What does this PR do?
This PR adds support for ViT and Swin symbolic tracing with torch.fx.
Fixes #16320
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17182/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17182",
"html_url": "https://github.com/huggingface/transformers/pull/17182",
"diff_url": "https://github.com/huggingface/transformers/pull/17182.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17182.patch",
"merged_at": 1652344948000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17181
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17181/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17181/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17181/events
|
https://github.com/huggingface/transformers/pull/17181
| 1,232,730,446
|
PR_kwDOCUB6oc43qASe
| 17,181
|
Fix LED documentation
|
{
"login": "manuelciosici",
"id": 51477,
"node_id": "MDQ6VXNlcjUxNDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/51477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manuelciosici",
"html_url": "https://github.com/manuelciosici",
"followers_url": "https://api.github.com/users/manuelciosici/followers",
"following_url": "https://api.github.com/users/manuelciosici/following{/other_user}",
"gists_url": "https://api.github.com/users/manuelciosici/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manuelciosici/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manuelciosici/subscriptions",
"organizations_url": "https://api.github.com/users/manuelciosici/orgs",
"repos_url": "https://api.github.com/users/manuelciosici/repos",
"events_url": "https://api.github.com/users/manuelciosici/events{/privacy}",
"received_events_url": "https://api.github.com/users/manuelciosici/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes several typos and formatting issues in docstrings.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR, but @sgugger is probably well-suited.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17181/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17181",
"html_url": "https://github.com/huggingface/transformers/pull/17181",
"diff_url": "https://github.com/huggingface/transformers/pull/17181.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17181.patch",
"merged_at": 1652289470000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17180
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17180/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17180/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17180/events
|
https://github.com/huggingface/transformers/issues/17180
| 1,232,637,786
|
I_kwDOCUB6oc5JeI9a
| 17,180
|
ValueError: The tokens {'null'} are defined in the tokenizer's vocabulary, but not in the decoder's alphabet. Make sure to include {'null'} in the decoder's alphabet.
|
{
"login": "erdoganensar",
"id": 67780763,
"node_id": "MDQ6VXNlcjY3NzgwNzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/67780763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erdoganensar",
"html_url": "https://github.com/erdoganensar",
"followers_url": "https://api.github.com/users/erdoganensar/followers",
"following_url": "https://api.github.com/users/erdoganensar/following{/other_user}",
"gists_url": "https://api.github.com/users/erdoganensar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erdoganensar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erdoganensar/subscriptions",
"organizations_url": "https://api.github.com/users/erdoganensar/orgs",
"repos_url": "https://api.github.com/users/erdoganensar/repos",
"events_url": "https://api.github.com/users/erdoganensar/events{/privacy}",
"received_events_url": "https://api.github.com/users/erdoganensar/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @erdoganensar,\r\n\r\nCould you provide a code snippet that shows how I can reproduce the error?",
"I have a very similar problem related to this issue using HuBERT large on English corpus. Here is my code snippet:\r\n\r\n```\r\nprocessor = Wav2Vec2Processor.from_pretrained(\"facebook/hubert-large-ls960-ft\")\r\nvocab_dict = processor.tokenizer.get_vocab()\r\nsorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])}\r\n\r\ndecoder = build_ctcdecoder(\r\n labels=list(sorted_vocab_dict.keys()),\r\n kenlm_model_path=\"some_3gram_correct.arpa\",\r\n)\r\nprocessor_with_lm = Wav2Vec2ProcessorWithLM(\r\n feature_extractor=processor.feature_extractor,\r\n tokenizer=processor.tokenizer,\r\n decoder=decoder\r\n)\r\n```\r\n\r\nHere is the first 20 lines of the `some_3gram_correct.arpa`:\r\n\r\n```\r\n\\data\\\r\nngram 1=759\r\nngram 2=3580\r\nngram 3=5747\r\n\r\n\\1-grams:\r\n-0.77224493\t<unk>\r\n-inf\t<s>\t-0.96890455\r\n-inf\t</s>\t-0.96890455\r\n-1.0275165\t</s>\r\n-1.8815907\tit\t-0.576264\r\n-2.4739406\tlooks\t-0.47474432\r\n-2.598626\tlike\t-0.19285807\r\n-1.7276717\ta\t-0.42839557\r\n-2.9495435\tnice\t-0.13558547\r\n-2.9495435\tday\t-0.3788458\r\n-2.122181\toutside\t-0.6023683\r\n-1.761344\tthat\t-0.4610282\r\n-2.0622265\t's\t-0.32115284\r\n-2.6496518\tabout\t-0.41595408\r\n```\r\n\r\nHere is the error message I got:\r\n\r\n`ValueError: The tokens {'H', 'Y', 'Q', 'M', 'D', 'I', 'F', 'P', 'J', 'V', 'X', 'B', 'C', 'U', 'E', 'S', 'N', 'R', 'Z', 'L', 'T', 'K', 'A', 'G', 'O', 'W'} are defined in the tokenizer's vocabulary, but not in the decoder's alphabet. Make sure to include {'H', 'Y', 'Q', 'M', 'D', 'I', 'F', 'P', 'J', 'V', 'X', 'B', 'C', 'U', 'E', 'S', 'N', 'R', 'Z', 'L', 'T', 'K', 'A', 'G', 'O', 'W'} in the decoder's alphabet.`\r\n\r\nHow should proceed? Thanks in advance.",
"@erdoganensar note that the problem here is that the tokenizer's vocab has upper-case letters but the decoder has lowercase letters. Now from your 3gram it looks like the decoder should indeed have lowercase letters. So what you should do here is the following before running the above code snippet:\r\n\r\n```python\r\nprocessor = Wav2Vec2Processor.from_pretrained(\"facebook/hubert-large-ls960-ft\")\r\ntokenizer_vocab_dict = processor.tokenizer.get_vocab()\r\ntokenizer_vocab_lowercase = {k.lower(): v for k,v in tokenizer_vocab_dict.items()}\r\n\r\nvocab_file = \"vocab.json\"\r\nwith open(vocab_file, \"w\", encoding=\"utf-8\") as f: \r\n f.write(json.dumps(tokenizer_vocab_lowercase, ensure_ascii=False))\r\n\r\nprocessor.tokenizer = Wav2Vec2CTCTokenizer(vocab_file)\r\nprocessor.save_pretrained(\"path/to/processor\")\r\n```\r\n\r\nHaving done this you can execute the following code which should then work correctly:\r\n\r\n```python\r\nprocessor = Wav2Vec2Processor.from_pretrained(\"path/to/processor\")\r\nvocab_dict = processor.tokenizer.get_vocab()\r\nsorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])}\r\n\r\ndecoder = build_ctcdecoder(\r\n labels=list(sorted_vocab_dict.keys()),\r\n kenlm_model_path=\"some_3gram_correct.arpa\",\r\n)\r\nprocessor_with_lm = Wav2Vec2ProcessorWithLM(\r\n feature_extractor=processor.feature_extractor,\r\n tokenizer=processor.tokenizer,\r\n decoder=decoder\r\n)\r\n```",
"@patrickvonplaten Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,652
| 1,655
| 1,655
|
NONE
| null |
### System Info
```shell
Hi @patrickvonplaten I got the same error as you mentioned above. I did what you said but I still get an error like below. Can you please help?
ValueError: The tokens {'null'} are defined in the tokenizer's vocabulary, but not in the decoder's alphabet. Make sure to include {'null'} in the decoder's alphabet.
My alphabet.json:
{"labels": ["", "", "", "\u2047", " ", "'", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "\u00e7", "\u00f6", "\u00fc", "\u011f", "\u0131", "\u015f"], "is_bpe": false}
my vocab.json:
{"[PAD]": 0, "": 1, "": 2, "[UNK]": 3, "|": 4, "'": 5, "a": 6, "b": 7, "c": 8, "d": 9, "e": 10, "f": 11, "g": 12, "h": 13, "i": 14, "j": 15, "k": 16, "l": 17, "m": 18, "n": 19, "o": 20, "p": 21, "q": 22, "r": 23, "s": 24, "t": 25, "u": 26, "v": 27, "w": 28, "x": 29, "y": 30, "z": 31, "ç": 32, "ö": 33, "ü": 34, "ğ": 35, "ı": 36, "ş": 37}
my added_tokens.json:
{}
my special_tokens_map.json:
{"bos_token": "null", "eos_token": "null", "unk_token": "[UNK]", "pad_token": "[PAD]"}
my tokenizer_config.json:
{"unk_token": "[UNK]", "bos_token": "null", "eos_token": "null", "pad_token": "[PAD]", "do_lower_case": false, "word_delimiter_token": "|", "replace_word_delimiter_char": " ", "special_tokens_map_file": null, "name_or_path": "model/checkpoint-6000", "tokenizer_class": "Wav2Vec2CTCTokenizer", "processor_class": "Wav2Vec2ProcessorWithLM"}
```
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi @patrickvonplaten I got the same error as you mentioned above. I did what you said but I still get an error like below. Can you please help?
ValueError: The tokens {'null'} are defined in the tokenizer's vocabulary, but not in the decoder's alphabet. Make sure to include {'null'} in the decoder's alphabet.
My alphabet.json:
{"labels": ["", "", "", "\u2047", " ", "'", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "\u00e7", "\u00f6", "\u00fc", "\u011f", "\u0131", "\u015f"], "is_bpe": false}
my vocab.json:
{"[PAD]": 0, "": 1, "": 2, "[UNK]": 3, "|": 4, "'": 5, "a": 6, "b": 7, "c": 8, "d": 9, "e": 10, "f": 11, "g": 12, "h": 13, "i": 14, "j": 15, "k": 16, "l": 17, "m": 18, "n": 19, "o": 20, "p": 21, "q": 22, "r": 23, "s": 24, "t": 25, "u": 26, "v": 27, "w": 28, "x": 29, "y": 30, "z": 31, "ç": 32, "ö": 33, "ü": 34, "ğ": 35, "ı": 36, "ş": 37}
my added_tokens.json:
{}
my special_tokens_map.json:
{"bos_token": "null", "eos_token": "null", "unk_token": "[UNK]", "pad_token": "[PAD]"}
my tokenizer_config.json:
{"unk_token": "[UNK]", "bos_token": "null", "eos_token": "null", "pad_token": "[PAD]", "do_lower_case": false, "word_delimiter_token": "|", "replace_word_delimiter_char": " ", "special_tokens_map_file": null, "name_or_path": "model/checkpoint-6000", "tokenizer_class": "Wav2Vec2CTCTokenizer", "processor_class": "Wav2Vec2ProcessorWithLM"}
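The case mismatch behind this error can be reproduced with a few lines of plain Python (no `transformers` or `pyctcdecode` required). The vocabulary and labels below are illustrative placeholders, not the actual checkpoint files above:

```python
# Plain-Python sketch of the consistency check that raises the ValueError:
# every non-special token in the tokenizer's vocabulary must also appear in
# the decoder's alphabet. The vocab/labels here are hypothetical examples.
tokenizer_vocab = {"[PAD]": 0, "[UNK]": 1, "|": 2, "a": 3, "b": 4, "C": 5}
decoder_labels = ["", " ", "a", "b", "c"]  # a lowercase-only alphabet
special_tokens = {"[PAD]", "[UNK]", "|"}

missing = {
    tok
    for tok in tokenizer_vocab
    if tok not in special_tokens and tok not in decoder_labels
}

if missing:
    # Reproduces the case-mismatch failure discussed in this thread:
    # "C" is in the vocab but only "c" is in the decoder's alphabet.
    print(
        f"The tokens {missing} are defined in the tokenizer's vocabulary, "
        f"but not in the decoder's alphabet."
    )
```

The fix suggested earlier in the thread (lowercasing the tokenizer vocabulary before building the decoder) makes this set empty.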
### Expected behavior
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17180/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17179
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17179/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17179/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17179/events
|
https://github.com/huggingface/transformers/pull/17179
| 1,232,601,013
|
PR_kwDOCUB6oc43pklw
| 17,179
|
Ensure tensors are at least 1d for pad and concat
|
{
"login": "Yard1",
"id": 10364161,
"node_id": "MDQ6VXNlcjEwMzY0MTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/10364161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yard1",
"html_url": "https://github.com/Yard1",
"followers_url": "https://api.github.com/users/Yard1/followers",
"following_url": "https://api.github.com/users/Yard1/following{/other_user}",
"gists_url": "https://api.github.com/users/Yard1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yard1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yard1/subscriptions",
"organizations_url": "https://api.github.com/users/Yard1/orgs",
"repos_url": "https://api.github.com/users/Yard1/repos",
"events_url": "https://api.github.com/users/Yard1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yard1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sgugger ",
"_The documentation is not available anymore as the PR was closed or merged._",
"let me add a quick unit test",
"@sgugger I am not sure what's up with code quality CI. I cannot reformat the file it complains about locally. Any idea what could be causing this?\r\n\r\nI have removed the changes to the offending file, let's see if that fixes it.",
"Ok, CI is green now :D",
"Thanks again!"
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
Ensures that tensors are at least 1d in `pad_and_concatenate` utility functions, and uses `atleast_1d` methods uniformly in the entire file.
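The idea can be sketched in NumPy (illustrative only, not the Trainer's exact code): promoting 0-d tensors to 1-d before padding/concatenating means scalar outputs such as a loss no longer crash the shape arithmetic.

```python
import numpy as np

def pad_and_concatenate(a, b, padding_index=-100):
    # Promote 0-d arrays to 1-d first; without this, concatenating two
    # scalars raises "zero-dimensional arrays cannot be concatenated".
    a, b = np.atleast_1d(a), np.atleast_1d(b)
    if a.ndim == 1 or a.shape[1] == b.shape[1]:
        return np.concatenate((a, b), axis=0)
    # Otherwise pad the second dimension up to the larger of the two widths.
    width = max(a.shape[1], b.shape[1])
    out = np.full(
        (a.shape[0] + b.shape[0], width) + a.shape[2:], padding_index, dtype=a.dtype
    )
    out[: a.shape[0], : a.shape[1]] = a
    out[a.shape[0] :, : b.shape[1]] = b
    return out

# Two 0-d "tensors" (e.g. per-step losses) now concatenate cleanly:
print(pad_and_concatenate(np.array(1.5), np.array(2.5)))  # [1.5 2.5]
```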
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17179/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17179",
"html_url": "https://github.com/huggingface/transformers/pull/17179",
"diff_url": "https://github.com/huggingface/transformers/pull/17179.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17179.patch",
"merged_at": 1652289548000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17178
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17178/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17178/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17178/events
|
https://github.com/huggingface/transformers/pull/17178
| 1,232,560,490
|
PR_kwDOCUB6oc43pcSR
| 17,178
|
Fix typo in bug report template
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
COLLABORATOR
| null |
# What does this PR do?
Fix a typo in issue templates.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17178/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17178",
"html_url": "https://github.com/huggingface/transformers/pull/17178",
"diff_url": "https://github.com/huggingface/transformers/pull/17178.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17178.patch",
"merged_at": 1652387472000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17177
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17177/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17177/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17177/events
|
https://github.com/huggingface/transformers/pull/17177
| 1,232,529,169
|
PR_kwDOCUB6oc43pVbf
| 17,177
|
Update self-push workflow
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> I find it difficult to make sense of some of the changes, due to diff being hard to follow\r\n\r\nOK @stas00 , the big diff is probably because I removed unused blocks. No real big change - I just copied from scheduled CI.\r\nSee below if you would like to have a quick look. \r\n\r\nMy only questions are\r\n\r\n - why we use `options: --gpus 0` previously in `run_tests_torch_cuda_extensions_multi_gpu`\r\n - could we use `options: --gpus all` for single gpu case as well as multi gpu?\r\n\r\nprev.\r\n```\r\nimage: nvcr.io/nvidia/pytorch:21.03-py3\r\noptions: --gpus 0\r\n```\r\nnow.\r\n```\r\nhuggingface/transformers-pytorch-deepspeed-latest-gpu\r\noptions: --gpus all\r\n```\r\n\r\nand\r\n\r\nprev.\r\n```\r\n - name: Install dependencies\r\n run: |\r\n apt -y update && apt install -y libaio-dev\r\n pip install --upgrade pip\r\n pip install .[deepspeed-testing]\r\n```\r\nnow\r\n```\r\n - name: Re-compile DeepSpeed\r\n working-directory: /workspace\r\n run: |\r\n pip install deepspeed # installs the deps correctly\r\n rm -rf DeepSpeed\r\n git clone https://github.com/microsoft/DeepSpeed && cd DeepSpeed && rm -rf build\r\n DS_BUILD_CPU_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 python3 -m pip install -e . --global-option=\"build_ext\" --global-option=\"-j8\" --no-cache -v --disable-pip-version-check\r\n```\r\n\r\n",
"thank you for highlighting the changes @ydshieh - that's super helpful.\r\n\r\n1. So for gpus:\r\n\r\n- multi-gpu should be `--gpus all` (needs 2 gpus)\r\n- single-gpu should be `--gpus 0` (must have only one gpu)\r\n\r\nso looking at the diff the original seems to be correct. perhaps not everywhere?\r\n\r\n2. For dependencies the original is correct.\r\n\r\nHave a look at what it signifies:\r\n```\r\nextras[\"deepspeed-testing\"] = extras[\"deepspeed\"] + extras[\"testing\"] + extras[\"optuna\"]\r\n```\r\n\r\nso the change is missing important dependencies install.\r\n\r\nand the new instructions aren't correct.\r\n\r\nWe only want the bleed edge (your now) install only for nightly build. self-push should use the released `deepspeed` version, that `pip install .[deepspeed-testing]` takes care of (but which of course can be moved into the docker if it's running via the docker image). If it's already there, then there is no need for that last pip call either.\r\n\r\nBottom line - no change from the original in either case logically.\r\n\r\nIf I missed something please let me know.\r\n",
"> so looking at the diff the original seems to be correct. perhaps not everywhere?\r\n\r\nThe current main branch has a job `run_tests_torch_cuda_extensions_multi_gpu` in `self-push.yml` which has `--gpus 0`.\r\nIn the latest commit in this PR, I reverted to the original version regarding DeepSpeed parts, but set `--gpus all` for multi-gpu job.\r\n\r\nRemark 1: some places in `self-scheduled.yml` have to be fixed.)\r\n\r\nRemarks 2: I checked this doc [expose-gpus-for-use](https://docs.docker.com/config/containers/resource_constraints/#expose-gpus-for-use), and think we can still use `--gpus all` even if the host machine has only 1 GPU. `--gpus 0` is necessary only if the host has multiple GPUs but we want to use only 1 of them. \r\n\r\n> 2. For dependencies the original is correct.\r\n> self-push should use the released `deepspeed` version, that `pip install .[deepspeed-testing]` \r\n\r\n~~I will change back to the original version for this part~~ (Done), thank you.\r\n",
"As long as the tests are run with `CUDA_VISIBLE_DEVICES=0` for `run_tests_single_gpu` jobs it indeed doesn't matter if more than 1 gpu is available.\r\n\r\nBut it's critical we ensure that it is set correctly, otherwise tests requiring a single gpu will get skipped. \r\n\r\nThank you for fixing where the setting are incorrect, @ydshieh!",
"> Before merging it, could you do a test run when modifying the `setup.py` to ensure that all tests are run correctly? Thank you!\r\n\r\nI had to fix a bug (i.e. when the test list is `tests`, i.e. when `setup.py` is changed).\r\nA full test workflow run is [here](https://github.com/huggingface/transformers/actions/runs/2318670278).\r\nAfter looking some failures, I am convinced that this PR is ready to be merged (the failures are the same as in scheduled CI runs).\r\n\r\nThank you for the reviews!"
] | 1,652
| 1,652
| 1,652
|
COLLABORATOR
| null |
# What does this PR do?
Update self-push CI workflow file:
- `tests_fetcher.py` is updated to output a json file, containing a dictionary mapping test categories to the identified test files (which is used by the updated push CI below)
- Reorganize the tests into models (e.g. `models/bert`, `models/gpt2`, etc.) and modeling categories (`pipeline`, `tokenization`), same as in scheduled CI
- `notification_service.py` and `self-scheduled.yml` are updated to use `[single/multi]-gpu` as artifact name prefixes (i.e. no more `-docker` at the end): with this minimal change, `notification_service.py` could be reused
Some workflow runs:
- [push CI](https://github.com/huggingface/transformers/actions/runs/2306332297)
- [scheduled CI](https://github.com/huggingface/transformers/actions/runs/2306421236)
Some tests failed intentionally (to verify their reports). The reports could be found on `transformers-ci-feedback-tests` channel.
### TODO:
- create new report channel and add the channel ID to the workflow file
**I added some reviews that contain some of my questions.**
@sgugger Maybe you could have a look for the changes in `test_fetcher.py`?
@stas00 Maybe for the changes regarding DeepSpeed and multi-gpu configurations?
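The artifact format described in the first bullet can be sketched as follows; the category names and file paths are hypothetical examples, not the fetcher's real output:

```python
import json

# Sketch of a JSON artifact mapping test categories to identified test
# files, which downstream CI jobs can consume per matrix entry.
test_map = {
    "models/bert": ["tests/models/bert/test_modeling_bert.py"],
    "models/gpt2": ["tests/models/gpt2/test_modeling_gpt2.py"],
    "pipelines": ["tests/pipelines/test_pipelines_common.py"],
}

with open("test_map.json", "w", encoding="utf-8") as f:
    json.dump(test_map, f, indent=2)

# A CI job then picks out the file list for its own category:
with open("test_map.json", encoding="utf-8") as f:
    files = json.load(f)["models/bert"]
print(" ".join(files))  # e.g. passed to pytest as positional arguments
```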
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17177/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17177",
"html_url": "https://github.com/huggingface/transformers/pull/17177",
"diff_url": "https://github.com/huggingface/transformers/pull/17177.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17177.patch",
"merged_at": 1652452080000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17176
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17176/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17176/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17176/events
|
https://github.com/huggingface/transformers/pull/17176
| 1,232,508,724
|
PR_kwDOCUB6oc43pRAp
| 17,176
|
Add ONNX support for Longformer
|
{
"login": "deutschmn",
"id": 37573274,
"node_id": "MDQ6VXNlcjM3NTczMjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/37573274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deutschmn",
"html_url": "https://github.com/deutschmn",
"followers_url": "https://api.github.com/users/deutschmn/followers",
"following_url": "https://api.github.com/users/deutschmn/following{/other_user}",
"gists_url": "https://api.github.com/users/deutschmn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deutschmn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deutschmn/subscriptions",
"organizations_url": "https://api.github.com/users/deutschmn/orgs",
"repos_url": "https://api.github.com/users/deutschmn/repos",
"events_url": "https://api.github.com/users/deutschmn/events{/privacy}",
"received_events_url": "https://api.github.com/users/deutschmn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17176). All of your documentation changes will be reflected on that endpoint.",
"Hey :hand: excellent PR, the code looks just fine!\r\n\r\nI wonder if you tried to specify the right `--feature` while converting your `LongFormer` model?\r\nWhich model did you try and what `--feature` did you choose?",
"> Hey โ excellent PR, the code looks just fine!\r\n\r\nThanks!\r\n\r\n> I wonder if you tried to specify the right `--feature` while converting your `LongFormer` model? Which model did you try and what `--feature` did you choose?\r\n\r\nI'm currently experimenting with [longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096). The reported difference of 3.77 is with `--feature=default`, but there are large differences with all other features as well (`masked-lm`: 14.1, `sequence-classification`: 0.04, `question-answering`: 0.25, `token-classification`: 0.19, `multiple-choice`: 0.1).\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @deutschmn, did you finally get good results with `Longformer`?",
"@ChainYo Unfortunately, I didn't get a chance to dive in further yet. I'll try to find some time, but if someone else has any ideas, please let me know.",
"Hey @ChainYo! I found some time and fixed the issues. Can we reopen? ๐ \r\n\r\nAdding support for the `global_attention_mask` was pretty easy after I tracked down the unsupported indexing lines, but it took quite a deep dive to find out where the value difference came from. There were two main issues:\r\n1. `masked_fill_` produces different results when converting to ONNX. I replaced it with a simple `where`.\r\n2. `as_strided` for chunking doesn't work either, presumably because it relies on the underlying memory layout that's different in ONNX. The perfect solution would be to use `unfold`, but unfortunately, that op is not supported. So I added a slow fallback that works in every case. Once there's support for `unfold`, we can get rid of that.",
"> Hey @ChainYo! I found some time and fixed the issues. Can we reopen?\r\n\r\nHey, thanks for iterating on this. I will ping @lewtun to open this again.",
"Thanks a lot for re-working on this @deutschmn โค๏ธ ! Ping me when you'd like a review :)",
"Thanks for reopening, @lewtun. Would be brilliant if you could review now ๐ ",
"Thanks for your reviews, @lewtun and @patrickvonplaten ๐ I worked in all your feedback and added Longformer to the ONNX tests. Slow ONNX + Longformer tests seem to work fine:\r\n\r\n<details>\r\n <summary><code>RUN_SLOW=1 pytest tests/models/longformer/test_modeling_longformer.py</code> โ 55 passed, 10 skipped, 14 warnings</summary>\r\n\r\n ```\r\n=================================================================== test session starts ===================================================================\r\nplatform darwin -- Python 3.9.10, pytest-7.1.2, pluggy-1.0.0\r\nrootdir: /Users/patrick/Projects/open-source/transformers, configfile: setup.cfg\r\nplugins: xdist-2.5.0, hypothesis-6.46.3, forked-1.4.0, timeout-2.1.0, dash-2.4.1\r\ncollected 65 items \r\n\r\ntests/models/longformer/test_modeling_longformer.py ...s.sss..................... [100%]\r\n\r\n============================= warnings summary =============================\r\ntests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training\r\n /Users/patrick/Projects/open-source/transformers/src/transformers/image_utils.py:222: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.\r\n def resize(self, image, size, resample=PIL.Image.BILINEAR, default_to_square=True, max_size=None):\r\n\r\ntests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torchvision/transforms/functional_pil.py:228: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). 
Use Resampling.BILINEAR instead.\r\n interpolation: int = Image.BILINEAR,\r\n\r\ntests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torchvision/transforms/functional_pil.py:295: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.\r\n interpolation: int = Image.NEAREST,\r\n\r\ntests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torchvision/transforms/functional_pil.py:311: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.\r\n interpolation: int = Image.NEAREST,\r\n\r\ntests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torchvision/transforms/functional_pil.py:328: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.\r\n interpolation: int = Image.BICUBIC,\r\n\r\ntests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/timm/data/auto_augment.py:39: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). 
Use Resampling.BILINEAR instead.\r\n _RANDOM_INTERPOLATION = (Image.BILINEAR, Image.BICUBIC)\r\n\r\ntests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/timm/data/auto_augment.py:39: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.\r\n _RANDOM_INTERPOLATION = (Image.BILINEAR, Image.BICUBIC)\r\n\r\ntests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/timm/data/transforms.py:39: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.\r\n Image.NEAREST: 'nearest',\r\n\r\ntests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/timm/data/transforms.py:40: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.\r\n Image.BILINEAR: 'bilinear',\r\n\r\ntests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/timm/data/transforms.py:41: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.\r\n Image.BICUBIC: 'bicubic',\r\n\r\ntests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/timm/data/transforms.py:42: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). 
Use Resampling.BOX instead.\r\n Image.BOX: 'box',\r\n\r\ntests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/timm/data/transforms.py:43: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.\r\n Image.HAMMING: 'hamming',\r\n\r\ntests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/timm/data/transforms.py:44: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.\r\n Image.LANCZOS: 'lanczos',\r\n\r\ntests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training_gradient_checkpointing\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torch/autocast_mode.py:162: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling\r\n warnings.warn('User provided device_type of \\'cuda\\', but CUDA is not available. 
Disabling')\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n========== 55 passed, 10 skipped, 14 warnings in 86.62s (0:01:26) ==========\r\n ```\r\n\r\n</details>\r\n\r\n<details>\r\n <summary><code>RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k \"longformer\"</code> โ 12 passed, 377 deselected, 228 warnings</summary>\r\n\r\n ```\r\n=========================================================================================== test session starts ===========================================================================================\r\nplatform darwin -- Python 3.9.10, pytest-7.1.2, pluggy-1.0.0\r\nrootdir: /Users/patrick/Projects/open-source/transformers, configfile: setup.cfg\r\nplugins: xdist-2.5.0, hypothesis-6.46.3, forked-1.4.0, timeout-2.1.0, dash-2.4.1\r\ncollected 389 items / 377 deselected / 12 selected \r\n\r\ntests/onnx/test_onnx_v2.py ............ [100%]\r\n\r\n============================================================================================ warnings summary =============================================================================================\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:1610: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if padding_len > 0:\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torch/_tensor.py:627: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n return self.item().__format__(format_spec)\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torch/nn/functional.py:2165: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n assert padding_idx < weight.size(0), \"Padding_idx must be within num_embeddings\"\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:1297: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n is_global_attn = is_index_global_attn.flatten().any().item()\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:565: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n assert (\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:832: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n assert (\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:835: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n assert query.size() == key.size()\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:785: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if hidden_states.size(1) == window_overlap * 2:\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:594: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n assert list(attn_scores.size()) == [\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:900: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n assert seq_len % (window_overlap * 2) == 0\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:901: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n assert attn_probs.size()[:3] == value.size()[:3]\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:902: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n assert attn_probs.size(3) == 2 * window_overlap + 1\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:668: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n assert attn_output.size() == (batch_size, seq_len, self.num_heads, self.head_dim), \"Unexpected size\"\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:1072: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n assert list(global_attn_scores.size()) == [\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:1122: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n assert list(global_attn_output.size()) == [\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:691: TracerWarning: Using len to get tensor shape might cause the trace to be incorrect. Recommended usage would be tensor.shape[0]. Passing a tensor of different shape might lead to errors or silently give incorrect results.\r\n len(is_local_index_global_attn_nonzero[0]), -1\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:1353: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if padding_len > 0:\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torch/onnx/symbolic_helper.py:719: UserWarning: allowzero=0 by default. In order to honor zero value in shape use allowzero=1\r\n warnings.warn(\"allowzero=0 by default. 
In order to honor zero value in shape use allowzero=1\")\r\n\r\ntests/onnx/test_onnx_v2.py: 12 warnings\r\n /Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torch/onnx/symbolic_opset9.py:2905: UserWarning: Exporting aten::index operator of advanced indexing in opset 14 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. If indices include negative values, the exported graph will produce incorrect results.\r\n warnings.warn(\"Exporting aten::index operator of advanced indexing in opset \" +\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n====================================================================== 12 passed, 377 deselected, 228 warnings in 3599.78s (0:59:59) ======================================================================\r\n ```\r\n\r\n</details>",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I merged `main` into this branch to resolve conflicts. Gently pinging @lewtun and @patrickvonplaten for a re-review.",
"\r\n\r\n\r\n\r\n> Hey @ChainYo! I found some time and fixed the issues. Can we reopen? ๐\r\n> \r\n> Adding support for the `global_attention_mask` was pretty easy after I tracked down the unsupported indexing lines, but it took quite a deep dive to find out where the value difference came from. There were two main issues:\r\n> \r\n> 1. `masked_fill_` produces different results when converting to ONNX. I replaced it with a simple `where`.\r\n> 2. `as_strided` for chunking doesn't work either, presumably because it relies on the underlying memory layout that's different in ONNX. The perfect solution would be to use `unfold`, but unfortunately, that op is not supported. So I added a slow fallback that works in every case. Once there's support for `unfold`, we can get rid of that.\r\n\r\nHi @deutschmn, thanks for contributing! As for the tracing problem of `masked_fill_` and `as_strided`, they are both supported in [`torch.onnx.symbolic_opset9`](https://github.com/pytorch/pytorch/blob/master/torch/onnx/symbolic_opset9.py), have you tried interpreting the forward pass of `LongformerSelfAttention` with a `symbolic` method to apply the symbolic tracing?\r\n\r\n__REF__\r\n* [Symbolic doc in PyTorch](https://pytorch.org/docs/stable/onnx.html#static-symbolic-method)\r\n* An example: how it was done for DeBERTa\r\n\r\nhttps://github.com/huggingface/transformers/blob/df28de0581aaf6d8742c4988137caac2b6602ca8/src/transformers/models/deberta/modeling_deberta.py#L122-L137",
"Hey @JingyaHuang, thanks for your feedback! I haven't looked into symbolic tracing yet. I'm travelling right now, but I'll have another look when I'm back in a couple of weeks."
] | 1,652
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR contributes to #16308 and addresses #16463 by adding support for exporting [Longformer](https://arxiv.org/abs/2004.05150) to ONNX.
The following necessary changes were already made:
- [x] `LongformerOnnxConfig` implemented
- [x] ONNX opset version >= 12
- [x] fix in model definition with `nn.functional.pad` (see https://github.com/huggingface/transformers/issues/13126#issuecomment-993645323)
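As a stdlib-only illustration (helper name is mine, not from the model code): the `nn.functional.pad` fix works because the pad length is computed with plain arithmetic on the sequence length, which traces cleanly instead of baking in a constant.

```python
def pad_to_attention_window(seq, window_overlap, pad_value=0):
    # Longformer pads the sequence up to a multiple of the attention window
    # (2 * window_overlap) before chunking; the pad length is pure arithmetic.
    attention_window = 2 * window_overlap
    padding_len = (attention_window - len(seq) % attention_window) % attention_window
    return seq + [pad_value] * padding_len

padded = pad_to_attention_window(list(range(10)), window_overlap=4)
len(padded)  # 16: 10 tokens padded up to the next multiple of 8
```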
However, there are still some open issues I'd need help with:
- [x] ~The conversion to ONNX fails when a `global_attention_mask` is provided that contains at least one `1`. It raises the following error: `Only consecutive 1-d tensor indices are supported in exporting aten::index_put to ONNX.`. So far, I have been unable to track down which line triggers this error. If we find it, we can probably rewrite the model implementation using this workaround: https://pytorch.org/docs/stable/onnx.html#writes-sets~ → issue resolved by rewriting accesses
- [x] ~The validation check currently fails with a high value difference (3.77). The JIT conversion raises the following warnings. Maybe some of them are the reasons for it:~ → tracked down and fixed
```
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:1569: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if padding_len > 0:
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:1256: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
is_global_attn = is_index_global_attn.flatten().any().item()
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:569: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert (
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:805: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert (
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:808: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert query.size() == key.size()
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:598: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert list(attn_scores.size()) == [
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:873: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert seq_len % (window_overlap * 2) == 0
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:874: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_probs.size()[:3] == value.size()[:3]
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:875: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_probs.size(3) == 2 * window_overlap + 1
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:669: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_output.size() == (batch_size, seq_len, self.num_heads, self.head_dim), "Unexpected size"
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:1312: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if padding_len > 0:
```
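To make the "rewriting accesses" fix concrete, here is a pure-Python sketch (function name is mine, not the PR's actual diff) of the explicit-indexing approach that replaces stride tricks like `as_strided`, which depend on the tensor's memory layout and therefore do not survive ONNX export.

```python
def chunk_with_explicit_indexing(seq, window_overlap):
    """Build overlapping chunks of size 2*window_overlap via explicit slicing.

    torch.as_strided constructs the same view from memory strides, which does
    not export to ONNX reliably; plain indexing does, at the cost of copies.
    """
    chunk_size = 2 * window_overlap
    num_chunks = len(seq) // window_overlap - 1
    return [seq[i * window_overlap : i * window_overlap + chunk_size]
            for i in range(num_chunks)]

chunk_with_explicit_indexing(list(range(8)), 2)
# [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7]]
```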
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case: #16308, #16463
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? → default Longformer and ONNX tests
## Who can review?
Maybe @ChainYo and/or @lewtun can help with this?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17176/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17176/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17176",
"html_url": "https://github.com/huggingface/transformers/pull/17176",
"diff_url": "https://github.com/huggingface/transformers/pull/17176.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17176.patch",
"merged_at": 1661409283000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17175
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17175/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17175/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17175/events
|
https://github.com/huggingface/transformers/pull/17175
| 1,232,403,060
|
PR_kwDOCUB6oc43o6gW
| 17,175
|
[M2M100 doc] remove duplicate example
|
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
MEMBER
| null |
# What does this PR do?
Removes duplicate translation example from doc.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17175/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17175",
"html_url": "https://github.com/huggingface/transformers/pull/17175",
"diff_url": "https://github.com/huggingface/transformers/pull/17175.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17175.patch",
"merged_at": 1652267807000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17174
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17174/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17174/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17174/events
|
https://github.com/huggingface/transformers/pull/17174
| 1,232,266,776
|
PR_kwDOCUB6oc43ohpP
| 17,174
|
logging documentation update
|
{
"login": "sanderland",
"id": 48946947,
"node_id": "MDQ6VXNlcjQ4OTQ2OTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanderland",
"html_url": "https://github.com/sanderland",
"followers_url": "https://api.github.com/users/sanderland/followers",
"following_url": "https://api.github.com/users/sanderland/following{/other_user}",
"gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanderland/subscriptions",
"organizations_url": "https://api.github.com/users/sanderland/orgs",
"repos_url": "https://api.github.com/users/sanderland/repos",
"events_url": "https://api.github.com/users/sanderland/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanderland/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
See https://github.com/huggingface/transformers/issues/17094
@LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17174/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17174",
"html_url": "https://github.com/huggingface/transformers/pull/17174",
"diff_url": "https://github.com/huggingface/transformers/pull/17174.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17174.patch",
"merged_at": 1652734048000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17173
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17173/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17173/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17173/events
|
https://github.com/huggingface/transformers/issues/17173
| 1,232,155,407
|
I_kwDOCUB6oc5JcTMP
| 17,173
|
model google/muril-base-cased has effectively infinite model_max_length
|
{
"login": "AngledLuffa",
"id": 3411033,
"node_id": "MDQ6VXNlcjM0MTEwMzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3411033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AngledLuffa",
"html_url": "https://github.com/AngledLuffa",
"followers_url": "https://api.github.com/users/AngledLuffa/followers",
"following_url": "https://api.github.com/users/AngledLuffa/following{/other_user}",
"gists_url": "https://api.github.com/users/AngledLuffa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AngledLuffa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AngledLuffa/subscriptions",
"organizations_url": "https://api.github.com/users/AngledLuffa/orgs",
"repos_url": "https://api.github.com/users/AngledLuffa/repos",
"events_url": "https://api.github.com/users/AngledLuffa/events{/privacy}",
"received_events_url": "https://api.github.com/users/AngledLuffa/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Thank you for the issue @AngledLuffa ! It's interesting to know that this is an important feature for you.\r\n\r\nUnfortunately we can't do much from `transformers` as `transformers` only retrieves the `model_max_length` key from the [`tokenizer_config.json` file](https://huggingface.co/google/muril-base-cased/blob/main/tokenizer_config.json). When the `model_max_length` key is not filled in, a default \"infinite\" value is filled in.",
"I tried making a topic here:\r\n\r\nhttps://discuss.huggingface.co/t/muril-base-cased-has-infinity-for-model-max-length/17838",
"Hi @AngledLuffa,\r\n\r\nI think the [new feature](https://huggingface.co/blog/community-update) that has just been deployed on the Hub might solve your problem!\r\n\r\nIt is now possible to open discussions on a hub model (or even propose a change in the tokenizer configuration!). Would you be interested in trying this out? It should ping the authors and invite them to make the change on the hub!",
"Thanks, I'll give that a try",
"Closing as the issue seems to be solved thanks to your message on the hub :confetti_ball: "
] | 1,652
| 1,654
| 1,654
|
NONE
| null |
### System Info
```shell
>>> transformers.__version__
'4.18.0'
>>> tokenizers.__version__
'0.12.1'
```
### Who can help?
@LysandreJik
@SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The muril-base-cased model has the model_max_length filled out incorrectly. For example:
```
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("google/muril-base-cased")
>>> tokenizer.model_max_length
1000000000000000019884624838656
```
### Expected behavior
I am fairly certain the correct size is 512
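For context (an aside based on the transformers source at the time, not the issue itself): the odd value is exactly Python's `int(1e30)`, the sentinel `VERY_LARGE_INTEGER` that transformers assigns when `tokenizer_config.json` omits `model_max_length`; the trailing digits come from float rounding. A minimal local workaround is to cap it yourself:

```python
# Sentinel used when model_max_length is missing from tokenizer_config.json
# (name mirrors the transformers source; treat as an assumption).
VERY_LARGE_INTEGER = int(1e30)  # float rounding yields the odd-looking value
print(VERY_LARGE_INTEGER)  # 1000000000000000019884624838656

def effective_max_length(model_max_length, fallback=512):
    """Return a usable max length, substituting a fallback for the sentinel."""
    return model_max_length if model_max_length < VERY_LARGE_INTEGER else fallback

print(effective_max_length(VERY_LARGE_INTEGER))  # 512
```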
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17173/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17172
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17172/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17172/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17172/events
|
https://github.com/huggingface/transformers/issues/17172
| 1,232,075,797
|
I_kwDOCUB6oc5Jb_wV
| 17,172
|
T5 zero-shot classification pipeline
|
{
"login": "peregilk",
"id": 9079808,
"node_id": "MDQ6VXNlcjkwNzk4MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9079808?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peregilk",
"html_url": "https://github.com/peregilk",
"followers_url": "https://api.github.com/users/peregilk/followers",
"following_url": "https://api.github.com/users/peregilk/following{/other_user}",
"gists_url": "https://api.github.com/users/peregilk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peregilk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peregilk/subscriptions",
"organizations_url": "https://api.github.com/users/peregilk/orgs",
"repos_url": "https://api.github.com/users/peregilk/repos",
"events_url": "https://api.github.com/users/peregilk/events{/privacy}",
"received_events_url": "https://api.github.com/users/peregilk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @peregilk \r\n\r\nT5 is not supported in the zero-shot classification pipeline because it does not have a sequence classification head. With T5, the sequence classification problem is formulated as a text-to-text generation problem, which is not possible to support in this zero-shot pipeline. ",
"@patil-suraj \r\nThanks for the answer!\r\n\r\nJust because I am trying to get a better understanding of this: Is there a fundamental difference between the output probabilities from a seq classification head and the output probabilities generated by the code at the bottom of this page: https://huggingface.co/alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli ? Or is this simply related to the way the pipelines are made?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> Hi @peregilk\r\n> \r\n> T5 is not supported in zero-shot classification pipeline because it does not have a sequence classification head. With T5 the seq classification problem is formulated as text-to-text generation problem which is not possible to support in this zero-shot pipeline.\r\n\r\nHi, thank you for the answer. I am interested in using T5 for classification. Are there any examples of fine-tuning? How should we interpret class tokens in the decoder output? If there's documentation on token IDs for each classification task, it would be very helpful."
] | 1,652
| 1,687
| 1,655
|
CONTRIBUTOR
| null |
### Feature request
The current zero-shot classification pipeline supports models like BERT and BART, but there does not seem to be any support for T5.
I notice that the new [mT5-mnli from Alan Turing Institute](https://huggingface.co/alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli) has code for extracting output probabilities and mapping them to the MNLI entailment/contradiction labels, but they are not able to integrate it into the pipeline.
I am not sure exactly what is missing for this to be in place. What needs to be done in Transformers, and how should the output from the model be adapted?
@patrickvonplaten
@anton-l
### Motivation
The medium/large T5 models have very impressive zero-shot abilities, and the combination of fine-tuning on NLI tasks ([Yin et al.](https://arxiv.org/abs/1909.00161)) and the classification pipeline is a great way to use NLP for easy classification tasks.
### Your contribution
My contribution will depend on what needs to be done. I am fine-tuning several models on MNLI, and can in any case contribute actively in testing.
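The post-processing step the mT5-mnli model card describes can be sketched in plain Python: NLI-based zero-shot classification (Yin et al.) scores each candidate label by how strongly the model says the premise entails a hypothesis like "This example is about <label>.", then normalizes the per-label entailment scores with a softmax. The function name and example logits below are illustrative only — for T5 the per-label logits would have to come from the decoder's score for the "entailment" target text, which is exactly the part the pipeline does not currently handle:

```python
import math

def zero_shot_label_probs(entailment_logits):
    """Turn one entailment logit per candidate label into label probabilities.

    This is the post-processing step of NLI-based zero-shot classification:
    normalize the per-label entailment scores with a softmax.
    (Illustrative helper, not a transformers API.)
    """
    m = max(entailment_logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in entailment_logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical entailment logits for labels ["sports", "politics", "cooking"]
probs = zero_shot_label_probs([2.1, -0.3, 0.5])
print(probs)  # the first label gets the highest probability
```

The sequence-classification head and the text-to-text formulation differ only in where the entailment score comes from; once a scalar score per label exists, the normalization above is the same.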
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17172/timeline
|
completed
| null | null |