Hugging Face
jinaai/xlm-roberta-flash-implementation
Transformers · 94 languages · xlm-roberta · License: cc-by-nc-4.0 · 🇪🇺 Region: EU
Branch: refs/pr/59
12 contributors · History: 64 commits

Latest commit by Sai-Suraj: "Fixes import error for this function `create_position_ids_from_input_ids` in transformers V5." (e02c3ed, verified, 2 months ago)
| File | Size | Last commit | Updated |
|---|---|---|---|
| .gitattributes | 1.52 kB | initial commit | about 2 years ago |
| README.md | 1.47 kB | Update README.md | over 1 year ago |
| block.py | 17.8 kB | refine-codebase (#33) | over 1 year ago |
| configuration_xlm_roberta.py | 6.54 kB | fix: set fp32 when using cpu bc bf16 is slow (#44) | over 1 year ago |
| convert_roberta_weights_to_flash.py | 6.94 kB | Support for SequenceClassification (#7) | almost 2 years ago |
| embedding.py | 4.44 kB | Fixes import error for this function `create_position_ids_from_input_ids` in transformers V5. | 2 months ago |
| mha.py | 34.4 kB | cpu-inference (#35) | over 1 year ago |
| mlp.py | 7.62 kB | refine-codebase (#33) | over 1 year ago |
| modeling_lora.py | 15.4 kB | [Fix bug] TypeError: argument of type 'XLMRobertaFlashConfig' is not iterable (#55) | over 1 year ago |
| modeling_xlm_roberta.py | 51.1 kB | output-hidden-states (#56) | over 1 year ago |
| rotary.py | 24.5 kB | fix: update frequencies when updating the rope base value (#40) | over 1 year ago |
| stochastic_depth.py | 3.76 kB | refine-codebase (#33) | over 1 year ago |
| xlm_padding.py | 10 kB | refine-codebase (#33) | over 1 year ago |
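The most recent change to `embedding.py` fixes an import error for `create_position_ids_from_input_ids` in transformers v5, where the function is no longer importable from its old location. For context, the computation behind that name is small: in the RoBERTa/XLM-R convention, padding tokens keep position `padding_idx`, and real tokens are numbered `padding_idx + 1, padding_idx + 2, …` in order of appearance. The sketch below is a pure-Python illustration of that convention, not the repo's actual code (which operates on PyTorch tensors via `cumsum` and also accepts a past-key-values offset):

```python
def create_position_ids_from_input_ids(input_ids, padding_idx):
    """Illustrative pure-Python version of the RoBERTa position-id rule.

    Padding tokens are assigned position `padding_idx`; non-padding tokens
    are numbered padding_idx + 1, padding_idx + 2, ... left to right.
    """
    position_ids = []
    for row in input_ids:
        count = 0        # running count of non-padding tokens in this row
        row_ids = []
        for tok in row:
            if tok == padding_idx:
                row_ids.append(padding_idx)   # padding keeps padding_idx
            else:
                count += 1
                row_ids.append(padding_idx + count)
        position_ids.append(row_ids)
    return position_ids

# padding_idx = 1 is the XLM-R tokenizer's pad id; 1s below are padding.
print(create_position_ids_from_input_ids([[5, 6, 7, 1, 1]], padding_idx=1))
# -> [[2, 3, 4, 1, 1]]
```

Offsetting positions by `padding_idx` (rather than starting at 0) keeps the pad position distinct, which is why RoBERTa-family models reserve `padding_idx` extra rows in their position-embedding table.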