jinaai/xlm-roberta-flash-implementation

Transformers
xlm-roberta
🇪🇺 Region: EU
1.12 GB · 10 contributors · History: 40 commits
Latest commit: fix: adapter masks (934939f) by Jackmin108, over 1 year ago
  • .gitattributes · 1.52 kB · initial commit · over 1 year ago
  • README.md · 147 Bytes · add mlm model and adjust naming · over 1 year ago
  • block.py · 17.8 kB · fix: adapter masks · over 1 year ago
  • config.json · 980 Bytes · Rename config to config.json · over 1 year ago
  • configuration_xlm_roberta.py · 2.88 kB · change rotary base (#31) · over 1 year ago
  • convert_roberta_weights_to_flash.py · 6.94 kB · Support for SequenceClassification (#7) · over 1 year ago
  • embedding.py · 3.74 kB · 2-adapter-tuning (#29) · over 1 year ago
  • mha.py · 33.2 kB · fix: adapter masks · over 1 year ago
  • mlp.py · 7.39 kB · fix: adapter masks · over 1 year ago
  • modeling_lora.py · 13.4 kB · change rotary base (#31) · over 1 year ago
  • modeling_xlm_roberta.py · 54 kB · fix: adapter masks · over 1 year ago
  • modeling_xlm_roberta_for_glue.py · 4.45 kB · Update modeling_xlm_roberta_for_glue.py · over 1 year ago
  • pytorch_model.bin · 1.11 GB · add mlm model and adjust naming · over 1 year ago
  • rotary.py · 22.9 kB · change rotary base (#31) · over 1 year ago
  • stochastic_depth.py · 3.76 kB · add stochastic_depth · over 1 year ago
  • tokenizer.json · 9.1 MB · upload model · over 1 year ago
  • tokenizer_config.json · 75 Bytes · Update tokenizer_config.json (#14) · over 1 year ago
  • xlm_padding.py · 10 kB · 2-adapter-tuning (#29) · over 1 year ago