
Tsedee/mongol-editor-llm-v2

Tags: Text Generation · PEFT · Safetensors · Mongolian · subtitle · asr-postprocessing · text-correction · lora · conversational · Eval Results (legacy)

Instructions to use Tsedee/mongol-editor-llm-v2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

  • Libraries
  • PEFT

    How to use Tsedee/mongol-editor-llm-v2 with PEFT:

    from peft import PeftModel
    from transformers import AutoModelForCausalLM

    # Load the base model first, then attach the LoRA adapter on top of it.
    # Note: "/workspace/qwen35-4b-claude" is the local base-model path recorded
    # in adapter_config.json; point it at your own copy of the base checkpoint.
    base_model = AutoModelForCausalLM.from_pretrained("/workspace/qwen35-4b-claude")
    model = PeftModel.from_pretrained(base_model, "Tsedee/mongol-editor-llm-v2")
  • Notebooks
  • Google Colab
  • Kaggle
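Once the adapter is attached, an end-to-end correction call can be sketched as below. The `correct` and `build_messages` names, the bfloat16 dtype, and the greedy decoding settings are illustrative assumptions, not the model card's prescribed usage; the repo's own chat_template.jinja is applied via the tokenizer's `apply_chat_template`.

```python
from typing import Dict, List

BASE_MODEL = "/workspace/qwen35-4b-claude"   # local path from adapter_config.json
ADAPTER = "Tsedee/mongol-editor-llm-v2"

def build_messages(transcript: str) -> List[Dict[str, str]]:
    # One-turn chat request; the repo ships chat_template.jinja, so
    # apply_chat_template renders this into the model's prompt format.
    # Passing the raw transcript as the user turn is an assumption; the
    # model card may specify an explicit instruction prefix.
    return [{"role": "user", "content": transcript}]

def correct(transcript: str, max_new_tokens: int = 256) -> str:
    # Lazy imports keep the sketch loadable without torch/transformers.
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(ADAPTER)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.bfloat16)
    model = PeftModel.from_pretrained(base, ADAPTER)
    model.eval()

    input_ids = tokenizer.apply_chat_template(
        build_messages(transcript), add_generation_prompt=True, return_tensors="pt"
    )
    with torch.no_grad():
        output = model.generate(input_ids, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True)
```

Greedy decoding (`do_sample=False`) suits text correction, where the model should deterministically repair the transcript rather than paraphrase it.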
mongol-editor-llm-v2 (720 MB)
  • 1 contributor
History: 3 commits
Latest commit: Tsedee, "Complete model card: training details, eval, usage, limitations" (71425d7, verified, about 1 month ago)
  • final · V2 LoRA trained (loss 0.78, 3 epochs on augmented data) · about 1 month ago
  • .gitattributes (1.63 kB) · V2 LoRA trained (loss 0.78, 3 epochs on augmented data) · about 1 month ago
  • README.md (13 kB) · Complete model card: training details, eval, usage, limitations · about 1 month ago
  • adapter_config.json (1.06 kB) · V2 LoRA trained (loss 0.78, 3 epochs on augmented data) · about 1 month ago
  • adapter_model.safetensors (340 MB) · V2 LoRA trained (loss 0.78, 3 epochs on augmented data) · about 1 month ago
  • chat_template.jinja (4.05 kB) · V2 LoRA trained (loss 0.78, 3 epochs on augmented data) · about 1 month ago
  • tokenizer.json (20 MB) · V2 LoRA trained (loss 0.78, 3 epochs on augmented data) · about 1 month ago
  • tokenizer_config.json (1.17 kB) · V2 LoRA trained (loss 0.78, 3 epochs on augmented data) · about 1 month ago
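The 340 MB adapter_model.safetensors stores only the low-rank LoRA factors, not full weights. For deployment without peft, those factors can be folded into the base model. A minimal sketch of the underlying arithmetic and the peft call (the function names and paths here are placeholders, not part of this repo):

```python
def lora_merge(W, A, B, alpha, r):
    # LoRA stores two low-rank factors A (r x in) and B (out x r);
    # merging adds their scaled product to the frozen base weight:
    #   W_merged = W + (alpha / r) * B @ A
    scale = alpha / r
    return [
        [W[i][j] + scale * sum(B[i][k] * A[k][j] for k in range(r))
         for j in range(len(W[0]))]
        for i in range(len(W))
    ]

def merge_adapter(base_path, adapter_id, out_dir):
    # peft applies the same fold across every targeted layer; imports
    # are lazy so this sketch loads without the heavy dependencies.
    from peft import PeftModel
    from transformers import AutoModelForCausalLM

    base = AutoModelForCausalLM.from_pretrained(base_path)
    merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()
    merged.save_pretrained(out_dir)  # now loads as a plain transformers model
```

Merging trades the adapter's small on-disk footprint for simpler serving: the output directory contains ordinary model weights that load without peft installed.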