
jinaai/xlm-roberta-flash-implementation

Transformers
xlm-roberta
🇪🇺 Region: EU
186 kB • 11 contributors • History: 64 commits
Latest commit: jupyterjazz, Sai-Suraj — "Fixes import error for `create_position_ids_from_input_ids` in transformers V5" (#59), cd915ad, about 22 hours ago
  • .gitattributes
    1.52 kB
    initial commit almost 2 years ago
  • README.md
    1.47 kB
    Update README.md over 1 year ago
  • block.py
    17.8 kB
    refine-codebase (#33) over 1 year ago
  • configuration_xlm_roberta.py
    6.54 kB
    fix: set fp32 when using cpu bc bf16 is slow (#44) over 1 year ago
  • convert_roberta_weights_to_flash.py
    6.94 kB
    Support for SequenceClassification (#7) almost 2 years ago
  • embedding.py
    4.44 kB
    Fixes import error for `create_position_ids_from_input_ids` in transformers V5 (#59) about 22 hours ago
  • mha.py
    34.4 kB
    cpu-inference (#35) over 1 year ago
  • mlp.py
    7.62 kB
    refine-codebase (#33) over 1 year ago
  • modeling_lora.py
    15.4 kB
    [Fix bug] TypeError: argument of type 'XLMRobertaFlashConfig' is not iterable (#55) about 1 year ago
  • modeling_xlm_roberta.py
    51.1 kB
    output-hidden-states (#56) about 1 year ago
  • rotary.py
    24.5 kB
    fix: update frequencies when updating the rope base value (#40) over 1 year ago
  • stochastic_depth.py
    3.76 kB
    refine-codebase (#33) over 1 year ago
  • xlm_padding.py
    10 kB
    refine-codebase (#33) over 1 year ago